The debate between using mixins or extends in Sass has been heating up recently. On the surface, both appear to provide solid benefits in helping us write consistent and modular code. However, the critics are out, and extends in particular have come under fire. But why?
(If you’re unsure what mixins and extends are, or what the Sass preprocessor is, I’d recommend researching them before continuing here.)
There are a handful of arguments against extends; the most popular include:
- Extends don’t allow arguments
- Extends don’t work within media queries
- Extends, when nested, can extend unintended code
- Extends group the compiled CSS based on reused declarations, not on how the code is contextually written
Fair enough: those are good arguments, and mixins solve every one of them. That said, extends won’t duplicate our declarations, whereas mixins will, and with mixins that can add up to an enormous amount of duplication. Extends, on the other hand, duplicate our selectors, combining them with other selectors and rearranging them so that they appear alongside the necessary declarations.
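To make that trade-off concrete, here is a minimal sketch (the class names are invented for illustration) of what each approach compiles to:

```scss
// A mixin copies its declarations into every caller:
@mixin button-base {
  padding: 10px;
  border-radius: 3px;
}
.btn-save   { @include button-base; }
.btn-cancel { @include button-base; }
// Compiled: .btn-save and .btn-cancel each repeat both declarations.

// An extend instead groups the selectors where the placeholder lives:
%button-base {
  padding: 10px;
  border-radius: 3px;
}
.btn-ok   { @extend %button-base; }
.btn-undo { @extend %button-base; }
// Compiled: .btn-ok, .btn-undo { padding: 10px; border-radius: 3px; }
```

The declarations appear once with extends, but the selectors are pulled out of their written order and grouped together, which is exactly the contextual rearrangement the fourth argument above objects to.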
So this raises the question: if mixins and extends both produce some amount of duplication, which is best?
A few weeks back I tweeted a bit of CSS I’ve been using frequently and was quite surprised by the response, thus I wanted to share it here too. What was the bit of CSS? Simply enough, it was the following selector:
Often when working with lists we want to apply a specific style to all of the `li` elements in a list except for one, most commonly the first or last `li` element. In the past we’ve been known to write the following CSS:
```css
li {
  border-bottom: 1px solid #ccc;
}

li:last-child {
  border-bottom: 0;
}
```
The problem here is that we’ve defined styles only to overwrite them later by way of a higher-specificity selector, in this case by adding the `:last-child` pseudo-class. In doing so we create extra work for the browser and slow down the rendering of these styles. Continue…
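One way to avoid the override entirely is to express the exception in the selector itself with the `:not()` pseudo-class. A sketch of that approach (the declaration is just a placeholder; the exact tweeted selector isn’t shown here):

```css
/* Style every list item except the last; no override rule needed */
li:not(:last-child) {
  border-bottom: 1px solid #ccc;
}
```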
We are investing a lot of research and development time into leveraging Docker in the next generation of our internal infrastructure. One of the next capabilities we need to build out to full maturity is dynamically routing web traffic from our Nginx load balancers to internal Docker containers in a performant way.
At Belly we are passionate fans of HashiCorp’s work, and they recently published a new project named Consul Template. We had been using an earlier HashiCorp tool named consul-haproxy to reconfigure our Nginx load balancers based on Consul data; Consul Template is a slightly more generalized tool that was fairly smooth to adopt.
Let me walk you through a proof of concept I whipped up last week. Starting from an OS X computer with Homebrew and VirtualBox installed, we will be able to spin up a Docker-based environment that will load-balance HTTP traffic via Nginx to an arbitrary number of backend processes, all running in separate Docker containers. Continue…
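To give a taste of the wiring involved, a Consul Template source file for an Nginx upstream might resemble the sketch below. The service name and paths are invented; this is not the exact template from the proof of concept:

```
# nginx-upstream.ctmpl — rendered by Consul Template, which re-renders
# the file whenever the membership of the "web" service changes
upstream backend {
{{ range service "web" }}
  server {{ .Address }}:{{ .Port }};
{{ end }}
}
```

Consul Template would then be invoked along the lines of `consul-template -template "nginx-upstream.ctmpl:/etc/nginx/conf.d/backend.conf:nginx -s reload"`, so Nginx reloads automatically each time backends come and go (paths and service name hypothetical).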
Docker is neat. If you don’t know it yet, you should read this now.
How we build
We’re using a cluster of machines specifically set up to build Docker images. The build happens by cloning a repo and then sending it to Docker to build. This leads us to a problem.
When a git repo is cloned, the mtime (modification timestamp) of each file on the filesystem is the time of the clone, not the friendly time you see on GitHub (which is based on the last commit). Docker uses a file’s mtime to determine whether or not something has changed, so NOTHING of yours gets cached in Docker’s layers. There’s a solution, however…
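One common fix (a sketch, not necessarily the exact script from the post) is to rewrite each file’s mtime from git history before building, so identical content gets identical timestamps on every clone. Note that `touch -d @<epoch>` is GNU coreutils syntax; on OS X you would install coreutils or use `touch -t` with a formatted date instead:

```shell
# Reset each tracked file's mtime to the timestamp of the last commit
# that touched it, so Docker's layer cache stops invalidating on every
# fresh clone.
reset_mtimes() {
  git ls-files | while IFS= read -r f; do
    ts=$(git log -1 --format=%ct -- "$f")   # committer time, Unix epoch
    [ -n "$ts" ] && touch -d "@$ts" "$f"
  done
}

# Usage: run inside the freshly cloned repo, just before `docker build`:
# reset_mtimes
```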
How many times have you had this nightmare:
“Hey, something’s broken. We need it fixed.”
“… but I’m working on something already,” you reply.
“Haha, hilarious. Developers are funny people with social influence.”
“Just kidding. Fix this right now.”
At Belly we make heavy use of Napa for all things API. Napa is our in-house, open source framework that makes it easy to create REST APIs in Ruby, and it forms the backbone of our application services layer.
Why we love Napa:
- Productivity: Being a Ruby framework, Napa bundles Rubyland mainstays like ActiveRecord and RSpec with built-in code generators to quickly get a service up and running. It is easy to configure, and quick to learn for newcomers.
- Predictable: Napa is our de facto standard. We use it across our services layer, so it is easy to jump from one service to another without context switching.
Being built on Ruby, Napa is easy to learn and incredibly productive. Like other Ruby-based projects, though, it inherits some of Ruby’s warts: lackluster performance when parsing large objects or manipulating large data, and, arguably more annoying, its reliance on run-time exceptions instead of compile-time errors for things like type errors, arity mismatches, or unwittingly referencing `nil` values instead of objects.
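As a concrete (and entirely hypothetical, non-Napa) illustration of that last wart:

```ruby
# Ruby surfaces type mistakes only at run time: calling a method on an
# unexpected nil raises NoMethodError when the line executes, never at
# load time. The method and keys below are invented for illustration.
def upcase_name(user)
  user[:name].upcase
end

upcase_name({ name: "belly" })   # => "BELLY"

begin
  upcase_name({ nickname: "b" }) # :name lookup silently returns nil...
rescue NoMethodError => e
  puts "only caught at run time: #{e.message}"
end
```

A compiler (or static type checker) would flag the second call before the code ever ran; Ruby only tells you once a user hits that path.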
In practical terms these equate to slow response times for service consumers and wasted developer time tracking down pesky bugs.
What if we could take the things we love about Napa and Ruby — speed of development, active community, project predictability and standardization — and supplement them with snappy run-time performance and a type system that augments our automated testing to reduce bugs in production? Continue…