
Episode 1: Why Ruby2JS is a Game Changer

Hey everybody, I’m so glad you could tune in for the debut episode of Fullstack Ruby. I’ve been on a few Ruby-themed podcasts over the past 18 months, but this is the first time I’m running a show about Ruby myself!

To kick things off, I’d like to introduce you to Ruby2JS and explain why I think this technology is a game changer.

Ruby2JS isn’t simply about an attempt to write what appears to be Ruby code for your website frontend. It’s really about writing JavaScript—AS IF JavaScript had Ruby’s syntax and was inspired by Ruby’s stdlib, ActiveSupport, and the like. A “RubyScript” if you will.

Three examples I cover on today’s episode:

Visit the Ruby2JS website for live compilation demos, documentation on the various transformations and approaches available, and a whole lot more.



The Rise of Fullstack Ruby & the Next Frontier of the Web

Credit: Johannes Groll on Unsplash

Welcome back to RUBY3.dev! Only…it’s not! Rather, a very warm welcome from Fullstack Ruby. Why the name change?

Well, a couple of reasons—the first of which is that your humble author (that’s me!) is not just a “Ruby developer” but a “web developer” as well. Yes, I’ll admit it: I don’t just write Ruby because I like assembling command line tools or crafting data processors or solving algorithmic puzzles. I like building websites. And I like building tools for building websites. I’m a web developer. It’s in my DNA.

So running a blog that’s generically about Ruby couldn’t hold my attention for too long. Thus I had to narrow the focus while simultaneously expanding it to the broader web industry.

The second reason is that today, right now, right this very minute, is the absolute best time to be a fullstack Ruby/web developer. And tomorrow will be even better! Never have we had such a robust arsenal of tools at our disposal for building sites and apps that encompass both the backend and frontend in novel and exciting ways. Let us enumerate just what’s so great about the Ruby landscape at this juncture:

So that’s the primary goal of the Fullstack Ruby blog going forward: to talk at length and in depth about all of the above futuristic technologies. And not just here on the blog, but on a new podcast as well entitled—shocker I know—Fullstack Ruby. 😅 Keep an eye out for the first teaser episode in early December.

From Ruby-ist to Browser-ist #

So if that’s the primary goal, what’s the secondary goal? To help introduce backend-focused Rubyists to some of the exciting new browser developments they may not be familiar with. Advancements in CSS and JavaScript. New APIs. New client/server architectures. Something I’ve discovered in talking with various long-time Ruby developers is that some have thrown the baby out with the bathwater. By rightly eschewing the madness of JS frontend frameworks/tooling run amuck, they’ve also limited their knowledge of what is genuinely cutting-edge and useful on the frontend. For example, it’s fine if you opine “gee, heavy-duty React development seems like a PITA!” But if in the process you also ignore custom elements/shadow DOM, libraries like Lit, CSS variables, animations, and other techniques for building live, reactive frontend components, you’re cutting off your nose to spite your face. Not everything can fit cleanly into a Turbo/CableReady pipeline, or even a Stimulus controller. Sometimes, you just need to embrace “vanilla” JS & CSS. It’s OK. You can do it—and maintain your sanity! 😌

Ruby for JavaScript Developers #

Finally, our third goal here at Fullstack Ruby is to introduce JavaScript developers to Ruby. We can shout all day from the rooftops how much we love Ruby and think it’s expressive and delightful—plus MINASWAN and all that—but if a JS dev who’s written some APIs in Node Express and assembled some pages with Next.js has no idea what we’re talking about or why—or how it’s relevant to their career—the #Ruby #WebDev community won’t grow. It’s as simple as that. So let’s take a moment out of our day to respectfully showcase to our fellow JS devs what is so appealing about Ruby, about the ecosystem, and about the community. Not in a spirit of competition, but in a spirit of collaboration. We’re ultimately all in the same boat: building great websites and applications. A polyglot web is a stronger web, a better web.

So that’s my spiel. If you’re feeling pumped about all these topics, please sign up for our newsletter, follow us on Twitter, and let’s get this party started! 🎉



Teaching Ruby to Beginners? Trying New Gems or Techniques? Use Bridgetown!

As a core member of the Bridgetown project, I realize I’m biased. I think every Rubyist who works on or even near the web should take a look—especially anyone who has current or past experience using Jekyll. But today’s post isn’t about Bridgetown per se but about how the next big release, v0.21 “Broughton Beach” (currently in beta and due out in late May), provides an intriguing new environment for teaching and learning Ruby and trying out new tools in the Ruby ecosystem.

Ruby Ruby Everywhere #

One of the new features in Broughton Beach which is germane to this discussion is the ability to write web pages in pure Ruby. Previously, you could write a webpage in a template language such as Liquid, ERB, Haml, etc., similar to other Ruby frameworks like Rails.

Wait, I hear you say. Isn’t ERB just Ruby inside the <% %> delimiters?

Sure, it is. But you usually don’t see people writing an entire Ruby script in an ERB file. It’s mainly intended for first authoring the raw text of the template and then sprinkling bits of Ruby into it.

What’s changed in v0.21 is you can now add a page, or a layout, or a data file, using nothing more than .rb. Basically you can write any Ruby code you want, and the value returned at the end of the file becomes the content of the page. So you can build up web page markup using string concatenation, fancy DSLs, transformations of incoming data, the whole nine yards. And you can add methods and inner classes and anything else you need to accomplish your objective.
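
Here’s a minimal sketch of what a pure-Ruby page could look like (the file name, methods, and markup below are purely hypothetical, and real pages may also declare front matter):

# src/topics.rb — the value of the final expression becomes the page content

def heading(text)
  "<h1>#{text}</h1>"
end

topics = %w[Ruby HTML CSS].map { |topic| "<li>#{topic}</li>" }.join

heading("Topics I care about") + "<ul>#{topics}</ul>"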

Demo Time #

Check out this sample repo on GitHub, along with the demo site here.

Feel free to fork the repo and take it for a spin! The only top-level files needed are the typical Gemfile/Gemfile.lock pair, and the bridgetown.config.yml file loaded by Bridgetown. Everything else goes in src. Let’s see what we have inside:

Is this the right way to build a Bridgetown site? 🤔 Well, I certainly wouldn’t recommend shipping it to production! 😅 The point isn’t whether you should use any of these techniques to build a website—rather that you can if you want to. (Just keep in mind that meme about scientists getting so preoccupied…)

Because you can, this becomes a compelling way to teach or to learn Ruby in the guise of building a website. Try out new techniques, new syntax, new parts of the standard library, new gems…the sky’s the limit! In the past, I might write one-off Ruby scripts and execute them on the command line, or maybe fiddle around in IRB. But now, with Bridgetown 0.21, I can actually maintain an experimental website full of pages which house various tips & tricks of Ruby programming I’ve picked up. Git init a repo, deploy it in mere minutes on Render, and we’re all set!

Further Experimentation #

Want to get really fancy? Add the method_source gem to your project, and then inside a Ruby page you can grab a string representation of a proc or a method in the page and use that to output the source code to the webpage itself. Mind blown! 🤯
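
Here’s a sketch of the idea, assuming the method_source gem is in your Gemfile (the method itself is just a stand-in):

require "method_source"

def fancy_greeting(name)
  "Hello there, #{name}!"
end

# method_source adds a `source` method which returns the code as a string
# (it needs a real file location, so this works from a .rb file, not in IRB):
source_string = method(:fancy_greeting).source

# …which you could then embed in the page output:
"<pre><code>#{source_string}</code></pre>"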

Another thing you can do (even if your pages use traditional ERB or another template language) is use the src/_data folder to drop .rb files that could load in data from filesystems or APIs (or generate data directly) and do all kinds of interesting things to it before returning either an array or a hash which is then accessible via site.your_data_file_here (tack on .rows if an array).
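
As a minimal sketch (with a hypothetical file name and fields), such a data file might look something like this:

# src/_data/reading_list.rb

books = [
  {title: "eloquent ruby", year: 2011},
  {title: "practical object-oriented design in ruby", year: 2012}
]

# Massage the raw data however you like; the returned array is what gets
# exposed to the rest of the site:
books.map do |book|
  book.merge(title: book[:title].split.map(&:capitalize).join(" "))
end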

My goal in creating Bridgetown was always to consider the “Ruby-ness” of the tool a feature, not a bug. (By contrast, its progenitor, Jekyll, strangely doesn’t overtly spell out that it’s a Ruby tool built by Rubyists for Rubyists.)

I’m very excited to see what crazy, experimental projects people will build using this new version of Bridgetown. Feel free to hop on over to our Discord chat room and let us know!



Better OOP Through Lazily-Instantiated Memoized Dependencies

Credit: Nathan Van Egmond on Unsplash

As you sit down to write a new class in Ruby, you’re very likely going to be calling out to other objects (which in turn call out to other objects). Sometimes this is referred to as an object graph.

The outside objects created or required by a particular class in order for it to function are broadly called dependencies. There are various schools of thought around how best to define those dependencies. Let’s learn about the one I prefer to use the majority of the time. It takes advantage of three techniques Ruby provides for us: variable-like method calls, lazy instantiation, and memoization.

Let’s Get Object Oriented #

First of all, what do I mean by “variable-like method calls”? I mean that this:

thing.do_something(123)

could refer either to thing (a locally-scoped variable) or thing (a method of the current object). What’s groovy about this is when I instantiate thing, I can choose how to instantiate it. I could either set it up like this:

def some_method
  thing = Thing.new(:abc)
  thing.do_something(123)
end

or this:

def some_method
  thing.do_something(123)
end

def thing
  Thing.new(:abc)
end

The beauty of the second example is it makes thing available from more than one method—all while using the same initialization values. The problem with this example, however, is that if I access thing more than once, it will create a new object instance each time.

def some_method
  thing.do_something(123)
  thing.finalize!
end

Oh no! The thing of the second line will be a different object than the thing of the first line! Yikes! Thankfully, we have a technique to fix that: “memoization via instance variable”.

Memoization is a technique used to cache the result of a potentially-expensive operation. In our particular case, we’re not so much concerned with performance-improving caching as we are with saving a unique value for reuse. We want the thing which gets used repeatedly to always refer to the same object. So let’s rewrite our thing method this way:

def thing
  @thing ||= Thing.new(:abc)
end

This code uses Ruby’s conditional assignment operator to either (a) return the value of the @thing instance variable, or (b) assign it and then return it. Now it’s assured we’ll never receive more than a single object instance of the Thing class. Let’s put it all together:

def some_method
  thing.do_something(123) # first call instantiates @thing
  thing.finalize! # second call uses the same @thing
end

def thing
  @thing ||= Thing.new(:abc)
end

What’s Lazy About This? #

Let’s take a look at what we might do if we weren’t using the above technique and we needed thing available across multiple methods. We might use an approach like this:

class ThingWrangler
  attr_reader :thing # create a read-only accessor method

  def initialize
    @thing = Thing.new(:abc) # create @thing when this object is created
  end

  def some_method
    thing.do_something(123)
    thing.finalize!
  end
end

Arguably, this is an anti-pattern: if some_method never actually gets called, thing was instantiated for nothing—wasting memory and CPU resources. In addition, it makes swapping out the Thing class challenging in tests or subclasses, because the Thing constant is hard-coded into the initialize method.

Some might recommend that you reach for the DI (Dependency Injection) pattern instead:

class ThingWrangler
  attr_reader :thing

  def initialize(thing:)
    @thing = thing
  end

  def some_method
    thing.do_something(123) # works with whatever thing was injected
    thing.finalize!
  end
end

Then you’d simply need to pass an initialized object to the new method of ThingWrangler from a higher level:

wrangler = ThingWrangler.new(thing: Thing.new(:important_value))
wrangler.some_method

Honestly, I really don’t like DI. It often makes for cumbersome APIs which are harder to comprehend, and it exposes implementation details to higher levels in situations where that might not even make sense. Do I really need to know that ThingWrangler doesn’t work without a Thing to rely on? Probably not. Contrast that with our friend the “lazily-instantiated memoized dependency” solution:

class ThingWrangler
  def initialize(value)
    @important_value = value # we store useful data for future use
  end

  def some_method
    thing.do_something(123) # first call instantiates @thing
    thing.finalize! # second call uses the same @thing
  end

  def thing
    @thing ||= Thing.new(@important_value) # aha! time to use saved data
  end
end

# This level doesn't need to know about the Thing class!
# It also doesn't cause any premature instantiation of @thing:
wrangler = ThingWrangler.new(:abc)

# NOW we call a method which in turn instantiates @thing:
wrangler.some_method

This is one of the solutions to writing “loosely-coupled” object-oriented code talked about in Sandi Metz’s book Practical Object-Oriented Design in Ruby.

What’s great about this pattern is it affords you many opportunities for customization. For example, you can write a subclass which swaps Thing out entirely! Dig this:

class HugeThingWrangler < ThingWrangler
  def thing
    @thing ||= HugeThing.new(@important_value)
  end
end

wrangler = HugeThingWrangler.new(:abc)
wrangler.some_method # uses HugeThing under the hood

Or when testing ThingWrangler where you want Thing to be a mock object under your control, you could simply stub the thing method so it returns your mock instead of the usual Thing instance.
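
For example, a quick sketch assuming RSpec and its built-in rspec-mocks stubbing:

RSpec.describe ThingWrangler do
  it "finalizes the thing after doing something" do
    wrangler = ThingWrangler.new(:abc)
    fake_thing = double("thing", do_something: true, finalize!: true)

    # Stub the memoized dependency method so it returns our fake object:
    allow(wrangler).to receive(:thing).and_return(fake_thing)

    wrangler.some_method

    expect(fake_thing).to have_received(:finalize!)
  end
end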

Or if you wanted to get real wild, here’s a bit of metaprogramming to add custom functionality around the original method:

ThingWrangler.class_eval do
  alias_method :__original_thing, :thing

  def thing
    puts "ThingWrangler#thing has been called!"
    obj = __original_thing
    puts "Now returning the thing object!"
    obj
  end
end

Now every time ThingWrangler accesses thing internally, your custom code will get run. (Careful out there!)

Some Important Caveats #

A memoized method shouldn’t be reliant on changing data, because its job is to return a single instance of Thing that gets cached and won’t ever change. So if you had code that looks like this:

def value_change(new_value)
  thing = Thing.new(new_value)
  thing.perform_work
end

You can’t memoize that instantiation, because you need a new Thing instance every time. However, what you could do instead is memoize the class itself! 🤯

def value_change(new_value)
  thing = thing_klass.new(new_value)
  thing.perform_work
end

def thing_klass
  @thing_klass ||= Thing
end

This still provides many of the benefits of the techniques we’ve described in terms of allowing subclasses to alter functionality, mock objects in tests, etc. Depending on the needs of your API, you might even want to create a configuration DSL to allow that Thing constant to be officially customizable by consumers of your API. (And to reiterate, still no DI techniques required!)
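
As a rough sketch of one way such a hook could look (building on the ThingWrangler example above—this is not a prescribed API):

class ThingWrangler
  class << self
    attr_writer :thing_class

    def thing_class
      @thing_class ||= Thing # the default dependency
    end
  end

  def thing
    @thing ||= self.class.thing_class.new(@important_value)
  end
end

# Consumers of your API can now swap out the dependency in one place:
ThingWrangler.thing_class = HugeThing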

One other caveat is if the original memoization method is overly complicated or reliant on internal implementation details, you could get into trouble with future subclasses.

class ParentClass
  def dependency
    @dependency ||= DependentClass.new(lots, of, input, values)
  end
end

class ChildClass < ParentClass
  def dependency
    # Hmm, what if the parent class changes internally and I don't?!
    @dependency ||= AnotherDependentClass.new(what, should, go, here)
  end
end

In fact, expensive custom logic typically isn’t compatible with the memoization technique as-is. Instead, a good pattern (if possible) is to have your dependency simply be given a reference to the calling object itself:

class ParentClass
  def dependency
    @dependency ||= DependentClass.new(self)
  end
end

class ChildClass < ParentClass
  def dependency
    @dependency ||= AnotherDependentClass.new(self)
  end
end

That way, it’s up to the dependency to glean any relevant data from the calling object in order to perform its work when required. This technique is used frequently across the Bridgetown project which I maintain.

For more on the benefits and caveats around memoization, read this article by “another” Jared (Norman). 😄

Conclusion: Trust Your LIM #

The Lazily-Instantiated Memoization technique is a powerful one and, when used appropriately and in a consistent fashion, it will help your objects become more modular and more easily customized and tested. Consider it whenever you need to manage dependencies within your Ruby code.



Static Typing in Ruby 3 Gives Me a Headache (But I Could Grow to Like It)

Credit: Hans-Peter Gauster on Unsplash

I’ve had a doozy of a time writing this article. See here’s the thing: I’ve been a Ruby programmer for a long time (and a PHP programmer before that). My other main language exposure just before becoming a Rubyist was Objective-C. That did require putting type names before variable or method signatures, but Objective-C also featured a surprising amount of duck typing and dynamism (for better or worse…Swift tried to lock things down quite a bit more).

But then there’s JavaScript / TypeScript.

My relationship with JavaScript is…complicated, at best. I actually write quite a lot of JavaScript these days. Even more to the point, a lot of the JavaScript I write is in the form of TypeScript. I don’t hate JavaScript. The modern ESM environment is quite nice in certain ways. Certainly an improvement over jQuery spaghetti code and callback hell.

But TypeScript is simply a bridge too far for me. I use it because a project I’m on requires it, but I don’t enjoy it. At times I hate it so much I want to throw my computer across the room. However, I can’t deny its appeal in one respect: those Intellisense popups and autocompletes in VSCode are very nice, as well as the occasional boneheaded mistake it warns me about.

What does any of this have to do with Ruby? I’m getting there. Bear with me just a wee bit longer, I implore you!

Using TypeScript Without Writing TypeScript #

One interesting trend I’ve started to see as of late (at least on Twitter) is taking what’s cool about TypeScript type checking, Intellisense, and all the rest…but applying it in such a way that you’re not actually writing TypeScript, you’re writing JavaScript. What you do is use JSDoc code comments to add type hints to your file (but not as your actual code). Then use a special mode of TypeScript type checking which will parse the JSDoc comments and interpret them as if you’d written all your type hints inline as actual code. Here’s a fascinating article all about it.

If this is starting to sound just a wee bit familiar to you, O Rubyist, it should—because that’s exactly what it’s like using YARD + Solargraph with Ruby.

Improving the Ruby Editing Experience #

Right now, I’m in the middle of an extensive overhaul of the Bridgetown project to add YARD documentation comments to all classes and methods. With the Solargraph gem + VSCode plugin installed, I get extensive type descriptions and code completion with a minimal amount of effort. If I were to type:

resource.collection.site.config

It knows that:

And if I were to pass some arguments into a method, it would know what those arguments should be. And if I were to assign the return value of that method to a new variable, it would know what type (class) that variable is.
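
To give you a flavor, here’s a minimal sketch of the kind of YARD comments involved (the class and method are hypothetical, not actual Bridgetown code):

class Greeter
  # @param name [String] who to greet
  # @param excited [Boolean] whether to add an exclamation point
  # @return [String] the rendered greeting
  def greet(name, excited: false)
    "Hello, #{name}#{"!" if excited}"
  end
end

With comments like these in place, Solargraph can surface the parameter and return types right in your editor.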

Livin’ the dream, right? But the one missing component of all this is strict type checking. Now the Solargraph gem actually comes with a type checking feature. But I’ve never used it, because I feel like if I were to go to the trouble of adding type checking to my Ruby workflow, I’d want something which sits a little closer to the language itself and is a recognized standard.

That’s where Ruby 3 + Sord comes in.

Ruby 3 + Sord = The Best of Both Worlds? #

Sord was originally developed to generate Sorbet type signature files from YARD comments. Sorbet is a type checking system developed by Stripe, and it does not use anything specific to Ruby 3 but is instead a custom DSL for defining types. However, Sord has recently been upgraded to support generation of RBS files (Ruby Signature). This means that instead of having to write all your Ruby 3 type signature files by hand (which are standalone—Ruby 3 doesn’t support inline typing in Ruby code itself), you can write YARD comments—just like with Solargraph—and autogenerate the signature files.

Once you have those in place, you use a tool called Steep, which is the official type checker “blessed” by the Ruby core team. Steep evaluates your code against your signature files and provides a printout of all the errors and warnings (similar to any other type checker, TypeScript and beyond).

So here’s my grand unifying theory of Ruby 3 type checking:

Nice theory, and extremely similar in overall concept to all the folks writing JavaScript yet using JSDoc to add “TypeScript” functionality in their code editors and test suites.

Unfortunately the reality is…not quite there yet. It kinda sorta works—with several asterisks. Hence the reason it took me so long to even write an article about Ruby 3 typing…and I’m not even sharing examples of how to do it but instead my thought process around why you’d want to do it and what the benefits are relative to all the hassles and headaches.

In my opinion, a type checking system for Ruby is useless unless it’s gradual. I want everything “unchecked” by default, and “opt-in” specific classes or even methods as we go along. While YARD + Solargraph alone gives you this experience, adding Sord + Steep into the mix does not. There doesn’t currently seem to be a way to say only generate type signatures for this file or that and only check this part of the class or that. At least I wasn’t able to find it.

In addition, all this setup is confusing as hell to beginners. There’s no way I can take someone’s fresh MacBook Air and install Ruby + VSCode + Solargraph + Sord + Steep (perhaps also Rubocop for linting) and get everything working perfectly with a minimum of headache and fuss. I myself have seen Solargraph and/or Rubocop support in VSCode break several times for unclear reasons, and it’s been a PITA to fix.

So here’s my crazy and wacky proposal: This should all be one tool. 🤯 I want to sit down at a computer, install Ruby + AwesomeRubyTypingTool, and it all just works. That’s the real dream here. I mean, TypeScript is TypeScript. It’s not a bunch of random JS libraries you have to manually cobble together into some kind of coherent system. TypeScript—for all its gotchas and flaws—is a known quantity. You might even say it just works—at least in VSCode. (No surprise there: both VSCode and TypeScript are Microsoft-sponsored projects.)

I have no idea what it would take for the Ruby core team and the other folks out there building these various tools to get together and hash this all out. But I really hope this story gets a hell of a lot better over the coming months. Because if not…I might just kiss Ruby 3 typing goodbye.

But not Solargraph. You’d have to pry that out of my cold dead hands. 😆



Ractors: Multi-Core Parallel Processing Comes to Ruby 3

Credit: Mukund Nair on Unsplash

For the longest time, I’ve wanted to be able to do a very simple thing in Ruby.

I’ve wanted to be able to run a block of expensive code multiple times in parallel and see all my CPU cores light up. ✨

This was very hard to do before! While Ruby does support multi-threaded code, only one thread at a time can be actively executing instructions (due to the Global Interpreter Lock, or GIL). That’s fine for apps that are often waiting on external I/O and so forth, but it doesn’t help you much if your app is primarily concerned with internal data processing. Historically, the only way you could truly achieve async parallelism in Ruby would be to fork multiple processes or schedule background jobs.

Until now.

Welcome to Ractor, a brand-new method of running async code in Ruby 3.

OK, Ractor sounds cool. But what is it? #

Ractor is an experimental new class in the Ruby corelib. With ractors, Ruby has for the first time lifted restrictions on the GIL. Now you can have multiple “RILs” if you will—aka one interpreter lock per ractor (and shared between multiple threads within a single ractor if you spawn threads).

Ractor is shorthand for “Ruby actor”. The actor concept has long been established in other languages such as Elixir to handle concurrency concerns. Essentially an actor is a unit of code that executes asynchronously and uses message passing to send and receive data from the main codepath or even other actors. For more on the history and conceptual thinking behind Ruby actors, read this Scout APM blog post by Kumar Harsh.

There are a variety of patterns at your disposal when using ractors, some of which are explained in the extensive Ractor documentation.

I’m very impressed by how simple it is to program with ractors. I’ve tried to work with Threads or gems in the past that aid with async development, and it’s always made my brain hurt with little to show for my efforts. Using the Ractor class is about as easy as I could possibly imagine (short of a one-line async keyword).

The other thing I’m impressed by is how straightforward it is to get deterministic, ordered output from multiple ractors. In the past if I tried to use threads to process data and add the outputs to an array, the array values would be out of order. If thread 1 finished after thread 2, the final array would be in 2, 1 order. With the ractors.map(&:take) pattern, you’re guaranteed that even if one ractor takes 2 seconds to process and another takes 6, you’ll still end up with an array of values in the same order in which you started up the ractors.

Example Time! #

I wanted to create the most basic example of ractors I could think of that would also serve as an interesting sort of benchmark compared to typical, synchronous Ruby code.

Here’s a script that spins up 20 ractors which perform some intensive data processing and return an output value, and the final script output is a joined array of all the ractor outputs.

require "benchmark"

ractors = []
values = []

puts "Starting Ractor processing"

time_elapsed = Benchmark.measure do
  20.times do |i|
    ractors << Ractor.new(i) do |i|
      puts "In Ractor #{i}"
      5_000_000.times do |t|
        str = "#{t}"; str = str.upcase + str;
      end
      puts "Finished Ractor #{i}"
      "i: #{i}" # implicit return value, or use Ractor.yield
    end
  end

  values = ractors.map(&:take)
end

# avg: 22 seconds, 1.6x performance over not_ractors
puts "End of processing, time elapsed: #{time_elapsed.real}"

# deterministic output. nice!
puts values.join(", ")

As you can see, using the Ractor class can be nearly as easy as working with standard lambdas. You don’t have to spend much mental overhead working through any additional data structures, scheduling, or thread concepts like mutexes. It “just works”.

And not only that, but it’s noticeably faster than a non-Ractor-based script:

require "benchmark"

values = []

puts "Starting Not-Ractor processing"

time_elapsed = Benchmark.measure do
  20.times do |i|
    puts "In Not-Ractor #{i}"
    5_000_000.times do |t|
      str = "#{t}"; str = str.upcase + str;
    end
    puts "Finished Not-Ractor #{i}"
    values << "i: #{i}"
  end
end

# 34.5 seconds, fans spun up !!!
puts "End of processing, time elapsed: #{time_elapsed.real}"

puts values.join(", ")

After a number of runs of both scripts on my tricked-out 16” MacBook Pro, the ractors exhibited a 1.6x performance increase. I’ve heard reports of other tests where converting Ruby code to use ractors resulted in 3x performance increases.

It’s very exciting to run a Ruby script and see every CPU light up in Activity Monitor, plus I noticed the single-core script made my fans spin up whereas the multi-core script kept my fans nearly inaudible.

Caveats #

As cool as ractors are, you can’t just flip a switch and Ractor all the things (!). There are a number of limitations around how sharing objects and passing them back and forth via messages works—limitations that make sense considering we’re now bypassing the GIL. So it really does require a whole new level of thinking around how you structure your objects, methods, and data structures in general (particularly objects which are “global” in nature). As an example of something I’m hoping to work on soon, I recently started a rewrite of the content pipeline in Bridgetown (a static site generator). When Bridgetown is processing a site, there are a number of shared objects in memory—most notably, a site object and a series of collection objects. Typically, when a particular page/post/etc. is getting loaded, it adds itself to the necessary arrays in the site or the collection. With ractors, you can’t do that! Multiple concurrent ractors running in parallel can’t be modifying shared state directly. Instead, you’d have to separate the whole process out into multiple stages: gather the metadata required to load the page, then spin up ractors to perform all the loading logic, and then use message passing to gather up the loaded pages from the ractors and add them to the shared objects.
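
Here’s a rough sketch of that staged shape (with hypothetical names and data—this is not Bridgetown’s actual API):

# Stage 1: gather lightweight metadata on the main ractor.
paths = (1..8).map { |n| "src/page_#{n}.md" }

# Hypothetical loader standing in for parsing/rendering work. It must not
# touch shared mutable state from inside a ractor.
class ResourceLoader
  def self.call(path)
    { path: path, content: path.upcase * 1_000 }
  end
end

# Stage 2: perform the heavy loading logic in parallel, one ractor per page.
ractors = paths.map do |path|
  Ractor.new(path) { |p| ResourceLoader.call(p) }
end

# Stage 3: collect the loaded pages via message passing, then mutate shared
# objects (site, collections, etc.) here on the main ractor only.
site_resources = ractors.map(&:take)
puts site_resources.length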

That’s the theory anyway. I’ll have to report back (a) if it works, and (b) if it’s a performance improvement over the regular synchronous code. But the promise is there: by architecting your app or gem around the ractor concept, your Ruby code gains the ability to shuttle intensive operations off to all your CPU cores at once—potentially yielding monumental performance increases.

Conclusion #

Since Ruby 3 is so new and Ractor itself is marked experimental, I think it will take some time for the Ruby ecosystem at large to evolve into this exciting new direction. And it may take a few point releases for esoteric ractor bugs or gotchas to get resolved. But I have no doubt this will happen. The rewards are too tantalizing to be left on the table for long. Finally, we can look at other languages like Elixir or Go and, instead of sighing wistfully at how easy it might be to write concurrent code, we can roll up our sleeves, fire up some ractors, and watch those CPU cores light up.



Everything You Need to Know About Destructuring in Ruby 3

Credit: Kiwihug on Unsplash

Welcome to our first article in a series all about the exciting new features in Ruby 3! Today we’re going to look at how improved pattern matching and rightward assignment make it possible to “destructure” hashes and arrays in Ruby 3—much like how you’d accomplish it in, say, JavaScript—and some of the ways it goes far beyond even what you might expect. December 2021: now updated for Ruby 3.1 — see below!

First, a primer: destructuring arrays #

For the longest time Ruby has had solid destructuring support for arrays. For example:

a, b, *rest = [1, 2, 3, 4, 5]
# a == 1, b == 2, rest == [3, 4, 5]

So that’s pretty groovy. However, you haven’t been able to use a similar syntax for hashes. This doesn’t work unfortunately:

{a, b, *rest} = {a: 1, b: 2, c: 3, d: 4}
# syntax errors galore! :(

Now, Hash has a method called values_at which you can use to pluck values out of a hash and return them in an array, which you can then destructure:

a, b = {a: 1, b: 2, c: 3}.values_at(:a, :b)

But that feels kind of clunky, y’know? Not very Ruby-like.

So let’s see what we can do now in Ruby 3!

Introducing rightward assignment #

In Ruby 3 we now have a “rightward assignment” operator. This flips the script and lets you write an expression before assigning it to a variable. So instead of x = :y, you can write :y => x. (Yay for the hashrocket resurgence!)

What’s so cool about this is the smart folks working on Ruby 3 realized that they could use the same rightward assignment operator for pattern matching as well. Pattern matching was introduced in Ruby 2.7 and lets you write conditional logic to find and extract variables from complex objects. Now we can do that in the context of assignment!

Let’s write a simple method to try this out. We’ll be bringing our A game today, so let’s call it a_game:

def a_game(hsh)
  hsh => {a:}
  puts "`a` is #{a}, of type #{a.class}"
end

Now we can pass some hashes along and see what happens!

a_game({a: 99})

# `a` is 99, of type Integer

a_game({a: "asdf"})

# `a` is asdf, of type String

But what happens when we pass a hash that doesn’t contain the “a” key?

a_game({b: "bee"})

# NoMatchingPatternError ({:b=>"bee"})

Darn, we get a runtime error. Now maybe that’s what you want if your code would break horribly with a missing hash key. But if you prefer to fail gracefully, rescue comes to the rescue. You can rescue at the method level, but more likely you’d want to rescue at the statement level. Let’s fix our method:

def a_game(hsh)
  hsh => {a:} rescue nil
  puts "`a` is #{a}, of type #{a.class}"
end

And try it again:

a_game({b: "bee"})

# `a` is , of type NilClass

Now that you have a nil value, you can write defensive code to work around the missing data.
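
And for completeness, here’s a sketch of the method-level rescue mentioned above, if you’d rather handle the pattern error explicitly:

def a_game(hsh)
  hsh => {a:}
  puts "`a` is #{a}, of type #{a.class}"
rescue NoMatchingPatternError
  puts "Sorry, no `a` key to be found!"
end

a_game({b: "bee"})

# Sorry, no `a` key to be found!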

What about all the **rest? #

Looking back at our original array destructuring example, we were able to get an array of all the values besides the first ones we pulled out as variables. Wouldn’t it be cool if we could do that with hashes too? Well now we can!

{a: 1, b: 2, c: 3, d: 4} => {a:, b:, **rest}

# a == 1, b == 2, rest == {:c=>3, :d=>4}

But wait, there’s more! Rightward assignment and pattern matching actually work with arrays as well! We can replicate our original example like so:

[1, 2, 3, 4, 5] => [a, b, *rest]

# a == 1, b == 2, rest == [3, 4, 5]

In addition, we can do some crazy stuff like pull out array slices before and after certain values:

[-1, 0, 1, 2, 3] => [*left, 1, 2, *right]

# left == [-1, 0], right == [3]

Rightward assignment within pattern matching 🤯 #

Ready to go all Inception now?!


You can use rightward assignment techniques within a pattern matching expression to pull out disparate values from an array. In other words, you can pull out everything up to a particular type, grab that type’s value, and then pull out everything after that.

You do this by specifying the type (class name) in the pattern and using => to assign anything of that type to the variable. You can also put types in without rightward assignment to “skip over” those and move on to the next match.

Take a gander at these examples:

[1, 2, "ha", 4, 5] => [*left, String => ha, *right]

# left == [1, 2], ha == "ha", right == [4, 5]

[8, "yo", 12, 14, 16] => [*left, String => yo, Integer, Integer => fourteen, *
right]

# left == [8], yo == "yo", fourteen == 14, right == [16]

Powerful stuff!

And the pièce de résistance: the pin operator #

What if you don’t want to hardcode a value in a pattern but have it come from somewhere else? After all, you can’t put existing variables in patterns directly:

int = 1

[-1, 0, 1, 2, 3] => [*left, int, *right]

# left == [], int == -1 …wait wut?!

But in fact you can! You just need to use the pin operator ^. Let’s try this again!

int = 1

[-1, 0, 1, 2, 3] => [*left, ^int, *right]

# left == [-1, 0], right == [2, 3]

You can even use ^ to match variables previously assigned in the same pattern. Yeah, it’s nuts. Check out this example from the Ruby docs:

jane = {school: 'high', schools: [{id: 1, level: 'middle'}, {id: 2, level: 'high'}]}

jane => {school:, schools: [*, {id:, level: ^school}]}

# id == 2

In case you didn’t follow that mind-bendy syntax, it first assigns the value of school (in this case, "high"), then it finds the hash within the schools array where level matches school. The id value is then assigned from that hash, in this case, 2.

So this is all amazingly powerful stuff. Of course you can use pattern matching in conditional logic such as case which is what all the original Ruby 2.7 examples showed, but I tend to think rightward assignment is even more useful for a wide variety of scenarios.
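
For comparison, here’s a quick sketch of the same sort of pattern used in conditional form via case/in (with made-up data):

config = {theme: "dark", font: "monospace"}

case config
in {theme: "dark", font:}
  puts "Dark theme using #{font}"
in {theme:}
  puts "Some other #{theme} theme"
end

# Dark theme using monospace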

“Restructuring” for hashes and keyword arguments in Ruby 3.1 #

New with the release of Ruby 3.1 is the ability to use a short-hand syntax to avoid repetition in hash literals or when calling keyword arguments.

First, let’s see this in action for hashes:

a = 1
b = 2
hsh = {a:, b:}

hsh[:a] # 1
hsh[:b] # 2

What’s going on here is that {a:} is shorthand for {a: a}. For the sake of comparison, JavaScript provides the same feature this way: const a = 1; const obj = {a}.

I like {a:} because it’s a mirror image of the hash destructuring feature we discussed above. Let’s round-trip-it!

hsh1 = {xyz: 123}

hsh1 => {xyz:}

# now local variable `xyz` equals `123`

hsh2 = {xyz:}

# hsh2 now equals `{:xyz=>123}`

Better yet, this new syntax doesn’t just work for hash literals. It also works for keyword arguments when calling methods!

def say_hello(first_name:)
  puts "Hello #{first_name}!"
end

# elsewhere…

first_name = "Jared"

say_hello(first_name:)

# Hello Jared!

Prior to Ruby 3.1, you would have needed to write say_hello(first_name: first_name). Now you can DRY up your method calls!

Another goodie: the values you’re passing via a hash literal or keyword arguments don’t have to be merely local variables. They can be method calls themselves. It even works with method_missing!

class MissMe
  def print_message
    miss_you(dear:)
  end

  def miss_you(dear:)
    puts "I miss you, #{dear} :'("
  end

  def method_missing(*args)
    if args[0] == :dear
      "my dear"
    else
      super
    end
  end
end

MissMe.new.print_message

# I miss you, my dear :'(

What’s happening here is we’re instantiating a new MissMe object and calling print_message. That method in turn calls miss_you which actually prints out the message. But wait, where is dear actually being defined?! print_message certainly isn’t defining that before calling miss_you. Instead, what’s actually happening is the reference to dear in print_message is triggering method_missing. That in turn supplies the return value of "my dear".

Now this all may seem quite magical, but it would have worked virtually the same way in Ruby 3.0 and prior—only you would have had to write miss_you(dear: dear) inside of print_message. Is dear: dear any clearer? I don’t think so.

In summary, the new short-hand hash literals/keyword arguments in Ruby 3.1 feels like we’ve come full circle in making both those language features a lot more ergonomic and—dare I say it—modern.

Conclusion #

While you might not be able to take advantage of all this flexibility if you’re not yet able to upgrade your codebase to v3 of Ruby, it’s one of those features I feel you’ll keenly miss after you’ve gotten a taste of it, just like keyword arguments when they were first released. I hope you enjoyed this deep dive into rightward assignment and pattern matching! Stay tuned for further examples of rightward assignment and how they improve the readability of Ruby templates.



Ruby on the Frontend? Choose Your Weapon

Credit: Meritt Thomas on Unsplash

We all know that Ruby is a great language to use for the backend of your web application, but did you know you can write Ruby code for the frontend as well?

Not only that, but there are two available options to choose from when looking to “transpile” from Ruby to JavaScript. These are:

Let’s take a quick peek at each one and see what might be right for your project.

Ruby2JS #

My personal favorite, Ruby2JS was created by Sam Ruby (yep, that’s his name), and it is intended to convert Ruby-like syntax to JavaScript as cleanly and “natively” as possible. This means that (most of the time) you’ll get a line-by-line, 1:1 correlation between your source code and the JS output. For example:

class MyClass
  def my_method(str)
    ret = "Nice #{str} you got there!"
    ret.upcase()
  end
end

will get converted to:

class MyClass {
  myMethod(str) {
    let ret = `Nice ${str} you got there!`;
    return ret.toUpperCase()
  }
}

There’s actually a lot going on here so let me unpack it for you:

How do you get started using Ruby2JS? It’s pretty simple: if you’re using a framework with Webpack support (Rails, Bridgetown), you can add the rb2js-loader plugin along with the ruby2js gem, write some frontend files with a .js.rb extension, and import those right into your JS bundle. It even supports source maps right out of the box so if you have any errors, you can see the original Ruby source code right in your browser’s dev inspector!

Full disclosure: I recently joined the Ruby2JS team and built the Webpack loader, so let me know if you run into any issues and I’ll be glad to help!

Opal #

The Opal project was founded by Adam Beynon in 2012 with the ambitious goal of implementing a nearly-full-featured Ruby runtime in JavaScript, and since then it has grown to support an amazing number of projects, frameworks, and use cases.

There are plenty of scenarios where you can take pretty sophisticated Ruby code, port it over to Opal as-is, and it just compiles and runs either via Node or in the browser which is pretty impressive.

Because Opal implements a Ruby runtime in JavaScript, it adds many additional methods to native JS objects (strings, integers, etc.) using a $ prefix for use within Opal code. Classes are also implemented via primitives defined within Opal’s runtime layer. All this means that the final JS output can sometimes look a little closer to bytecode than traditional JS scripts.

For instance, the above example compiled via Opal would result in:

/* Generated by Opal 1.0.3 */
(function(Opal) {
  var self = Opal.top, $nesting = [], nil = Opal.nil, $$$ = Opal.const_get_qualified, $$ = Opal.const_get_relative, $breaker = Opal.breaker, $slice = Opal.slice, $klass = Opal.klass;

  Opal.add_stubs(['$upcase']);
  return (function($base, $super, $parent_nesting) {
    var self = $klass($base, $super, 'MyClass');

    var $nesting = [self].concat($parent_nesting), $MyClass_my_method$1;

    return (Opal.def(self, '$my_method', $MyClass_my_method$1 = function $$my_method(str) {
      var self = this, ret = nil;

      
      ret = "" + "Nice " + (str) + " you got there!";
      return ret.$upcase();
    }, $MyClass_my_method$1.$$arity = 1), nil) && 'my_method'
  })($nesting[0], null, $nesting)
})(Opal);

Thankfully, Opal too has support for source maps so you rarely need to look at anything like the above in day-to-day development—instead, your errors and debug output will reference clean Ruby source code in the dev inspector.

One of the more well-known frameworks using Opal is Hyperstack. Built on top of both Opal and React, Hyperstack lets you write “isomorphic” code that can run on both the server and the client, and you can reason about your web app using a well-defined component architecture and Ruby DSL.

Conclusion #

As you look at the requirements for your project, you can decide whether Ruby2JS or Opal might suit your needs.

Regardless of which you choose, it’s exciting to know that we can apply our Ruby knowledge to the frontend as well as the backend for web applications large and small. It’s a great day to be a Rubyist.
