Tech Industry

A “Free Hand” At the Bagel Shop (and, On Software Project Estimation)

There’s a bagel shop I go to where the staff does something unremarkable but effective after they take my order.

Once the order is written down, the cashier yells out “free hand.” The first time I heard it I wasn’t sure if I was missing something, if she was talking to me perhaps, or if there was some secret language among bagel shop employees that I didn’t know about.

At this small, family-owned bagel & eggs deli in the Brooklyn neighborhood where I live (La Bagel Delight), I will get a bacon, egg & cheese on a plain bagel. This event, predictable as it is, isn’t something you’d normally notice or mention. But this bit about them saying “free hand” reminds me of scrum software development.

Picture 3 bagel workers and a line of 10 customers. Each order takes maybe 1 to 4 minutes, depending on whether it involves toasting, eggs, or other preparation. In this bagel shop, no worker is specialized; there’s no one person dedicated to one kind of sandwich preparation. All of the workers take each new job indiscriminately. (We might call this a “homogeneous” team, and no, we’re not talking about them being gay.)

You’d think maybe the workers could “double up” the jobs: take 2 or 3 jobs and run them concurrently. For example, a customer with many sandwiches in one order could take upwards of several minutes to complete.

In the meantime, if the worker has a lot of extra time while waiting for the bread to toast, they might come back to the queue to take the next job. In this scenario (I find myself empathizing here) the worker must make his own queue, that is, a queue within a queue.

His queue is: 1) the first order, which is toasting (last time we checked), and 2) the new order he just picked up. Interestingly, preparing bagels (toasting, buttering, making eggs, slicing cheese, layering on toppings, etc.) is like software development in some ways: the worker must manage several wait states, times when he or she must stop and wait for something or someone else. In both arenas, some parts of the task take a fixed length of time (like the time it takes to toast), other parts take a length of time proportional to the size of the request, and still other parts take an unusual, unexpectedly long time (a “snag,” like what happens when the egg salad is not made). If the egg salad isn’t made, the customer might have to wait up to 10 minutes for someone to make it. Who’s gonna make this egg salad? Put a pin in that; I’ll get back to it in a second. Other than the fixed costs and the unusually long costs (both can create wait states), what else is there?

Excluding the unknowns (those wait states and unexpected snags), one can reasonably say the length of time, or level of effort, to create a bagel (or a piece of software) will be proportional to the size of the request: the number of toppings or features.
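To make that concrete, here’s a toy estimation model in Ruby. The `estimate_minutes` helper and every number in it are invented for illustration, not real bagel (or sprint) data: total time is a fixed cost, plus a cost proportional to the size of the request, plus any snag time.

```ruby
# Toy estimation model; all numbers are invented for illustration.
# Total time = fixed costs + cost proportional to request size + snag time.
def estimate_minutes(toppings:, fixed: 2.0, per_topping: 0.5, snag: 0.0)
  fixed + (toppings * per_topping) + snag
end

estimate_minutes(toppings: 3)            # happy path => 3.5 minutes
estimate_minutes(toppings: 3, snag: 10)  # egg salad isn't made => 13.5 minutes
```

The fixed and proportional parts are easy to estimate up front; it’s the `snag` term that wrecks schedules, which is exactly why removing blockers matters so much.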

In software, we face these kinds of costs too. The build time for a compiled program, for example, can typically be thought of as fixed. For a software process with code tests, the length of time to run the tests (as in continuous integration) can be thought of as a fixed time cost too.

Hopefully, your software development doesn’t have unusually long costs— and I’ll bet the guy working at the bagel shop doesn’t want those either. You see, the egg salad not being made is analogous to the comps or design specs not being prepared for a highly visual UX. Or worse, the design specs being made but the feature set being ill-defined.

When a developer has a “free hand,” it means they have time and attention to give to your problem, or to the next problem on the backlog. That problem, which ideally should be thought of as the company’s problem, should be ready to go (without blockers, back & forth, etc.). This way, the story can move through the queue as quickly as orders move at a bagel shop like La Bagel Delight.

That’s why a good bagel store manager and a good software product manager remove blockers. The bagel store manager notices when the egg salad is low and makes more. The product manager foresees the blocker the software developer will have and removes it before it becomes a blocker.

“Free hand” is what the cashier calls out to ask if there is an available resource to take the next job. It’s a signal of the establishment’s demand and of the queue moving.

It turns out that while it’s easy to suspect that, like in the bagel shop, a “free hand” can take 2 or 3 jobs at once, this is often a pitfall.

Why? Think of the whole system as a machine. If each cog has to manage its own internal queue of wait states, you will create a lot of task switching.
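That cog-level queueing cost can be sketched in a back-of-the-envelope comparison (all numbers invented): a worker who interleaves jobs does exactly the same total work as one who finishes jobs one at a time, plus a context-switch penalty every time they put one job down and pick another up.

```ruby
# Back-of-the-envelope comparison; all numbers are invented.
# One worker finishing jobs sequentially vs. interleaving them in
# 1-minute slices, paying a context-switch cost at every switch.
SWITCH_COST = 0.5 # minutes lost re-loading context per switch

def sequential_total(jobs)
  jobs.sum # no switching: total time is just the work itself
end

def interleaved_total(jobs, slice: 1.0)
  slices = jobs.sum { |job| (job / slice).ceil }
  jobs.sum + (slices - 1) * SWITCH_COST # same work, plus switch overhead
end

jobs = [3.0, 2.0, 4.0]
sequential_total(jobs)   # => 9.0 minutes
interleaved_total(jobs)  # => 13.0 minutes for the same work
```

No individual job got bigger; the system just spent extra time remembering where it was.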

Task switching is your enemy!

Having lots of “singly queued” developers who switch tasks all day long is, fundamentally, anti-scrum. By doing this, the product manager creates muri (Japanese for “overburden”).

An efficient system is the only way to scale up. In both software development and a small food store, we see common elements:

  • Jobs come in one by one (or sometimes many orders from one individual)
  • Each job takes a varying amount of time to complete

Although it is tempting to manage this scrum-within-scrum, it is the path to hell. The reason is more obvious in software than it is at the bagel shop: there’s no good reason for long wait states. When a story isn’t ready, it’s best for the developers to put it back onto the backlog (or someone else’s backlog) and move on to the next one.

If you take one thing away from this article, it is that you should reduce muda (Japanese for “value-reducing waste”) by eliminating blockers as quickly as possible. In this way, you will bring the work back into the main branch (in Git terminology) in a quick, iterative fashion. Ideally, your development work is deployed to production as iteratively and quickly as possible, too. That’s how you know you have a clear definition of done.

Remember, always have a quick stand-up at regular intervals. Stand-up is always about 1) What you accomplished yesterday, 2) What you’re working on today, and 3) Anything that is blocking you. Most importantly, a ritual stand-up is when blockers can be removed. (Remember, stand-up is not a management meeting, which I wrote a post about last year.)

Be like the bagel shop and always look for the team’s greatest need: Is the egg salad low? Let’s make some egg salad. If not, maybe go to the front of the queue and be the ‘free hand’ that will take the next job in the queue.

Coaching Series Tech Industry

What is an antipattern?

What is an antipattern?

Glad you asked. An anti-pattern is a system, process, or pattern, typically found in business or software development, that is counter-productive: it works against the actual goal. Another way to think of antipatterns: they’re what you see when groups of people working together repeat the same anti-productive, anti-useful activities again and again.

So, in other words, an “anti-pattern” is something you shouldn’t do.

But antipatterns are unavoidable. They exist because humans are flawed, and structure creates human agita, which, like water, will always look for the shortest way to flow down a mountain.

Antipatterns are literally everywhere in software development. A good example: think of a company culture of meetings. You have endless meetings in which people try to agree on a design for something and go around in circles for hours, or even days or weeks. You think to yourself, “Why are we even doing this?” That’s an antipattern.

That particular antipattern has a specific name: design by committee.

Where do antipatterns come from?

Some antipatterns come from the early decades of urban office culture in 1950s America. Some come from the hardware manufacturing cultures of the 70s and 80s (IBM and Hewlett-Packard). Many come from the 90s, and new ones have emerged in the age of web development.

You can think of antipatterns typically as business (human) antipatterns and software (code) antipatterns. Although helping companies identify and work with their human antipatterns is an important part of business development, in this series I will focus on coding antipatterns, particularly things that can be found in Ruby on Rails.

Rails Antipatterns

Rails comes with powerful tools. These powerful tools can be used to break the principles of object-oriented programming: to create code that is tightly coupled, not encapsulated, poorly organized, or repeats itself. So remember that with great power comes great responsibility.

It is said the road to hell is paved with good intentions. All good antipatterns start with a developer’s need. Typically, they exist because of either that developer’s (1) laziness or (2) ignorance of the larger picture.

Tech Industry

The One About the Chickens and The Pigs (aka What Stand-up Is and What Stand-up Isn’t)

There’s an old adage in scrum software development about chickens and pigs at stand-up. Chickens are product managers and pigs are developers.

You don’t hear it too often anymore, probably because these days it feels a little sexist. (It’s not lost on anyone anymore how gendered the roles of product manager and developer feel in most tech companies— the product people being women and the developers being men.)

It takes a leap of faith to understand what it means, and what question it’s even asking in the first place.

The question is fairly basic: Who participates at your morning company or engineering stand-up?

That is, I mean, really: who speaks and who does not speak. I know it sounds rigid, and those of us who talk about it get called a “scrum bullwhip” (a title I proudly wear). Pigs speak at stand-up. Chickens (product managers, CEOs, and stakeholders), if they come to stand-up (and generally only product managers should), aren’t supposed to speak unless spoken to.

What? It sounds like some kind of renaissance classism, like when people used to say “children shouldn’t speak until spoken to,” but to understand the chicken & pig adage is to learn something core about scrum and the stand-up meeting itself.

  • Standup is about managing the work, not the people.

What the F does this have to do with chickens and pigs, you might be asking? (I warned you it’s a long way around with this one.) Well, the idea is that we’re making breakfast. We’re all making breakfast together.

The end result is the breakfast. How we get there matters, but not everybody’s contribution is equal.

The scrum process forces the engineers to prioritize working on the very most important thing first (hopefully, the one task they have assigned to them).

Most product people, stakeholders, and CEOs, being unfamiliar with the concept of stand-up, incorrectly treat the engineering stand-up as a “management meeting” and think it’s their opportunity to talk or get what they want.

Sadly, this is, in fact, the opposite of scrum. Scrum is about aligning your engineering efforts with your organization-wide goals.

These days many millennials, born of the gadget generation, have grown up in jobs where they can hide their high-functioning adult ADHD (Attention Deficit Hyperactivity Disorder).

A high-performing engineering team works in total contrast to this ADHD, attention-switching, always-on-call mentality: The thing to work on is the one thing right in front of you, never anything else.

If that thing that you’re working on isn’t the most important thing, then the CEO or product owners haven’t correctly prioritized the backlog. When product people and CEOs come to scrum and participate, it’s like a group of people trying to make breakfast while other people are trying to plan for lunch or dinner or tomorrow’s meals. The appropriate response you’ll get from the developers is: “Hey, back off, we’re making breakfast now; come back when we’re done and we’ll talk about lunch.”

The chickens lay eggs. The pigs are slaughtered. After breakfast is made, the chickens are still alive.

It’s a grotesque metaphor and one that can even be insulting to product people because it makes them feel like their contribution isn’t valuable. Well, that’s part of the crux of it too:

It isn’t that the product owner’s contribution as a chicken isn’t valuable; it’s that software development is a moving train.

As a developer, so that I can achieve flow, I should have the materials needed to do the ticket (story) I’m working on without a lot of back and forth with the stakeholder.

In fact, the correct amount of back and forth with the stakeholder is 0 (zero).

Each and every back and forth costs wait states — that is, times when the flow of the craft (that is, building the software) has to wait for someone else in the chain. If this is you then your process is most definitely held back by wait states.

What does this have to do with chickens not speaking at stand-up? It’s not that chickens are literally supposed to be silent; it’s that they don’t get a turn when you go around, each giving your stand-up.

Why don’t chickens have a turn? Because stand-up is about 1) what code we accomplished yesterday, 2) what we’re working on today, and 3) removing blockers.

The chickens don’t actually accomplish coding tasks. They contribute to the coding tasks with things like wireframes, mockups, designs, written user stories, and business cases; these are called artifacts. But these artifacts, although they help the process, aren’t the finished result: working, production-quality code. (Except, arguably, in the case of web designs, where the designs are translated into working code.)

It’s a really old, sexist, outdated adage that comes from the 90s, and in 2020 it’s probably insulting to most.

I haven’t yet thought of a good replacement, because the core of the adage (which I admit is kind of nonsense on many levels when you really try to lay it all out) is about the fact that the production of the code is what matters. Or, if you will, the end result (which in software development is working code.)

Scrum assumes and prioritizes high-performance engagement. At the same time, it shines a light on low-performing tools, processes, and people. It is the “sunlight” that will disinfect any broken engineering process.

It ain’t easy, and it ain’t for everyone, but when practiced right, it remains the most engaged and accelerated form of software discipline today.

Tech Industry

The WFHpocalypse Part 2: Thoughts on On-Site Work

Years ago I was conversing with a friend in the field, a UX developer who worked at a large Ruby consulting firm. It was the early teens, that is, the 2010s, and Rails was taking off like wildfire within the startup scene of New York City. I too worked in startups at the time, first in advertising and marketing platforms, then app development, and eventually finding my way to e-commerce.

I don’t remember exactly which year he made this comment to me, but I remember the company: Gilt Groupe. I name-drop it here because neither he nor I ever worked for Gilt, so our observations are merely second-hand. (Typically I don’t reveal the names of the people or companies in my stories.)

Gilt employed 100 Ruby engineers and bragged that at 12 noon, when their sales went live on the site, they had more traffic than Amazon. My friend came from a pair programming background, where everyone works in pairs. Each pair works on one thing at one computer; they use one monitor (typically, though it can be two mirrored monitors) and two keyboards. In the old days, the actual advice from XP evangelists was to use one keyboard and slide it back and forth between the two developers.

(Obviously, the age of corona will challenge some of these old days of working.)

In a conversation about workplace culture my friend said to me that Gilt was a very “headphones on” kind of place.

What he meant, in short, was: a culture of overly intrusive management, little collaboration, perhaps even competitive behavior between the engineers, and people wearing headphones in an office setting to signify to their colleagues that they don’t want to be interrupted.

Many companies and the people who run them might find this challenging.

If your idea of building software is that you need developers to be available to you to be ‘responsive’ to changing needs— throughout the day— you probably aren’t planning your software very well. Constant interruptions will universally mean that your engineers won’t be able to truly focus on the really hard stuff.

But these aren’t great reasons to insist on growing your workforce on site. In fact, they are only reasons why you should question if you have the right tools in place.

Second, if you think you need developers to be onsite because you want to ‘talk’ to them about the product, then you probably don’t know what a product owner is or could do for you.

A strong development effort will be led by a product owner, who will hold the onsite meetings and write everything down in a neat, codified way for your engineers.

Third, and this is one of the most important: the 40-hour workweek is arbitrary. In fact, laborers used to work in factories far more than 40 hours a week, and it was only because of the rise of unions that we now have the Monday-through-Friday workweek.

What’s important to software development effort is:

1) Focus (getting ‘in the flow’) and not being interrupted

2) Comfortable workspace, with good lighting, and an ergonomic chair or standing desk

3) Being mentally and socially stimulated, through interactions with others (“interaction” here implies either face-to-face or any other kind of online interaction for someone remote)

4) Not overworking.

Great software is built with effort, and effort makes you tired.

It’s natural to be tired after a good day’s work. It’s so normal we have a nomenclature around it: “go home to recharge” we say.

The most effective way to work is to focus on efficacy and recharging adequately. Stop worrying about everything else.

Pros of working onsite:

1) You should code onsite because face to face meetings convey more than you can over written words, stories, video chat, etc.

2) You should code onsite because coding is a collaborative exercise

3) You should code onsite because CEOs and managers like to see occupied workstations to make it look like people are working

In 50 Ways To Find a Job, Dev Aujla says:

“There are two types of jobs that you can get. One is the type of job where you mentally check out, bide your time, and collect a paycheck. In this job your days are filled with a type of work that often feels stressful, frantic, meaningless. The second type of job is filled with the kind of work that feels natural, that comes easily, that rejuvenates you, and that isn’t motivated by stress or fear.”

Dev Aujla, 50 Ways To Find a Job

When I read this, it struck me as slightly simplistic, but I knew exactly what Aujla meant.

It would be ill-advised to argue that onsite work is inherently soul-crushing, or that remote work is inherently better, because neither claim would make sense.

Jason Fried and David Heinemeier Hansson are such advocates of remote work (they wrote a book about it) that they call offices “distraction factories.”

But it doesn’t have to be this way!

Mentoring & Pairing

In Extreme Programming Explained, Kent Beck proselytized a practice known as Extreme Programming, or “XP” for short. In it, programmers pair. Pair programming means, specifically, that two developers work at the same time on the same code. In fact, the classic way to pair is for both developers to sit at one computer with two separate sets of keyboard & mouse. The programmers sit equally distant from a big flat-screen monitor (not one person on their laptop and the other “looking over their shoulder”).

Typically, one developer acts as the ‘navigator’ and the other the ‘driver,’ but an effective pair will swap roles naturally and without formality. (As mentioned before, another setup suggested by Beck is having one keyboard that is passed back and forth between them.) The navigator will be thinking big-picture about the code: where they are going, the interactions between objects. The driver is the one doing the typing, typically paying attention to the syntax of the API and each little detail as they go. But even when you’re the driver, it’s still exponentially better to have a second pair of eyes catching mistakes.

When I first read about pair programming I was inspired. Although I’ve done and seen many forms of light pairing over the years, I’ve only known a few companies that have a two-programmer policy for all of their onsite work. 

What I can say from my decade and a half in this industry is that while there is no moralism to onsite vs. remote, there is often economy to it. To CEOs and managers, onsite workers seem like workers who can be managed, because they can see them. Onsite workers give CEOs and managers a sense of certainty. Onsite workers, especially engineers, can even artificially inflate the value of the company, because investors and potential acquirers will value a company more for the perception created by having onsite workers.

Furthermore, there’s always been this thing in professional settings where people who work onsite schedule around their own personal needs: the doctor, the dentist, the cable guy are all seen as typical, reasonable excuses for taking time off or working from home.

But what’s typically not so obvious to younger employees is that if you only take time away from your work for these “life necessities,” you’re likely ignoring self-care in a dramatic way. I’m talking about things like exercise, eating well, doing your laundry, spending time outside. If you have kids, these might be the most important years of their lives: what parent wants to have to work until 6PM or later when their kid gets out of school at 3PM?

Most young parents I know in these scenarios have negotiated some degree of working from home, or schedules that allow them to be with their kids after school (like leaving at 3PM to go pick them up).

These kinds of ideas make companies very nervous. They falsely equate time-on-the-clock with output, which is dangerous. In that mindset, your onsite employees will typically produce as little as they can to keep their jobs, performing at the minimal output needed to look like good employees.

What if I told you a few dirty little secrets of onsite work?

Cons of working onsite:

1) Generally speaking, people who work onsite spend much of their time managing their boss’s expectations, going through the motions of looking productive, and inserting themselves into the face-to-face structures of the company to make themselves appear valuable. While I see this a lot from non-engineers, I’ve seen plenty of it from engineers too.

2) Offices, especially open offices, are probably the most distracting places to code. People coming and going, other teams making noise, various managers coming up and interrupting you throughout the day. I think most engineers have been there and know what I’m talking about.

3) Nobody actually works 40 hours per week or 8 hours per day. People come in at 9:45, they unpack their bag, they set up their computer, they go get some coffee. They stop at the ‘water cooler’ for a quick chat with a colleague. They might start about 30 minutes later (and even that isn’t a given). They do a few minutes of what looks like work, but then they get distracted (see #2). At 10:30 maybe the team has stand-up, which should last 5 minutes but instead takes 20, and then they’re thinking about what someone said at stand-up. They take a few minutes to look up something new, or to check Facebook, or the news. All things considered, most ‘onsite’ employees typically have an effective workday of about 5 or 6 hours, sometimes less.

This effect is compounded by being good. If you’re a rockstar, you actually have no incentive to work harder in this scenario. Why? Because the amount of energy you spend is a function of the difficulty of the task you’re working on, not the number of minutes you are hypothetically sitting at your desk with that problem in front of you. If you get more done in less time, your company gets more out of you, but you (typically) make the same amount of money.

It’s a natural tendency to manage our work in this way; what’s not natural is the concept that each hour equates to a linear amount of output.

Now, this is not to say that onsite work is the pits or to advocate for remote work— in some regards the opposite.

Some employees probably aren’t better off working remotely. Some need guidance, supervision, or mentoring that can’t be done well remotely.

The thing I’ve been wanting to ask employers who insist that onsite work is better (often with something of an obsession with the subject as a moral divide) is whether they’ve asked themselves some really deep questions about their own expectations:

1) Do you really think that making people be onsite for 40 hours a week (or 38, or 36) actually gets you 40 hours of productivity?

What if we had a 4-day workweek? If we gave everyone Fridays off, would they get only 4/5ths of what they now get done in 5 days? Shocking to most suit & tie people, the answer is almost always no.

When you give people the freedom to work on their own schedule, employees nearly always work harder. Most engineers I know would rather work four 10-hour days (Mon-Thur) and take Fridays off. Why? Well, it comes back to flow. More compressed, focused work is almost always more effective.

2) As a CEO/manager, you know that programmers spend a lot of time at their keyboards. But what’s all this other stuff they do? Talking, drawing diagrams, sometimes just walking around the block.

In software, we call this stuff at the computer GAK — short for “geek at keyboard.”

What software engineers don’t tell you is that lots of the hard problems aren’t solved at the keyboard. In fact, sometimes when you are solving the hardest problem, the best solution is to walk away from the keyboard and start to come up with a mental model of how to approach the problem.

This mental modeling can happen in many forms. Sometimes a team of people can do it (together) in front of a whiteboard. Sometimes you really need to take a walk around the block.

I’ve found personally that the best code I write I actually have to give a great deal of thought to before I write it. Sometimes these mental models take me hours of just thinking about how all of the parts might fit together. (Admittedly I might be doing more architecture work than coding in these cases, but the point is the same: A lot of the work is conceptual, not typing.)

3) Are 8 continuous hours of work, starting at some abstract time like 9AM or 10AM and running through 5PM or 6PM, really the most effective way to get coding done?

In my experience, I tend to work very hard in the morning for several hours. If I get 3 hours of focused, uninterrupted time in, I know I’ve had a good morning. (If I can get 3 ½ hours, I’ve had a great morning.)

The level of concentration required to code is intense.

If you still don’t believe me about the lack of a connection between hours on the clock and work produced, take this anecdote: I noticed years ago that if I did very little in the mornings, I wouldn’t get hungry until the late afternoon. If I worked very hard, I would get hungry earlier (like lunchtime). After thinking about it more, I realized that work is related to synapses firing in your brain.

Since many of the easy problems you will encounter will be solved quickly, as an experienced software engineer what I’ve seen most is that the hard problems are the ones where the work takes mental energy. It’s the hard problems that demand that I concentrate and focus. It’s the hard problems that make me more tired after 1 hour of coding than 3 hours of non-coding. It’s when I’m solving the hard problems that I don’t want you to talk to me.


Now that’s not to say that working onsite is useless. In this age of quarantine, we are reminded of all the subtle ways Zoom is inferior: 1) interactions are negotiated over email, Slack, or another messenger, any choice of which forces one to confront the explosion of messaging options available today (a task that could give anyone choice anxiety).

2) Sitting in front of Zoom screens involves both a kind of participation and acting at the same time, as you become keenly aware of looking at the feeds of the other participants and also the fact that they are looking at you.

3) There is something just slightly lost about remote pairing. I’m not sure what it is; perhaps it is the added need to negotiate control of the typing (something that comes a little more naturally in person). Perhaps it is the fact that onsite pairing makes both partners focus precisely on one thing until it’s done. With everyone at home, there’s a propensity to be distracted, either by activity in your home or by other activity on your computer (that is, you can keep the Zoom window open while doing something else). It just doesn’t seem equal to what we now call ‘f2f’ (face-to-face).

4) The actually productive meeting, you know, the elusive one everyone dreams about having, in which ideas are written out onto whiteboards, then erased and reworked, and the meeting itself feels like the process of coming to a group decision. (Not those ill-managed ones with endless circles of indecision gripped by design by committee.)

Those things are lost, sadly, in our Corona-Zoom era.

Maybe Coronavirus and the new remote work paradigm will mean that working remotely becomes the default. (Likely, when the quarantines are lifted, the culture clash between onsite and remote will become even more stark.) Maybe not.

But one thing I can tell you is that a lot of people are experiencing this for the first time who’ve never even asked any of these tough questions— if you’re at your desk, are you really working? If you’re writing code, what’s the hardest part of what you’re doing and what’s the easiest? Are you really more effective working longer hours, or are you more effective working shorter, focused hours? What sacrifices are you making to be onsite at a job continuously throughout the day?

I’m not saying remote work is for everyone or every company, just like onsite work isn’t for everyone. But I know for sure that anyone who tries to tell you that one is morally better than the other is full of malarkey, and that a huge number of engineers who have never been pushed for efficacy simply don’t know what it is. In fact, I’ve also seen the dangerous opposite: a developer who is oriented by the company to put their energy into thinking within the existing structure and not outside the box. While this is a natural effect of being employed, it isn’t a normal or rational choice for any highly motivated engineer. That’s because the pace of change in this industry is lightning fast.

Tech Industry

The WFHpocalypse Part 1: Thoughts on Remote Work

Since the global quarantine, working from home, or remote work, seems to be all anyone can talk about.

About 15 years ago when I started my career, software engineers were a rare, abstract breed of professionals. People knew of this profession, talked about it, but few had met someone who actually worked day-to-day writing code.

In that day (around 2005), the term ‘colocating’ became a thing. Colocating basically means, albeit confusingly, working in the same office; in other words, working onsite. (In the context of a computer or server, and I realize I’m dating myself by even mentioning physical servers, yes, actual machines, to colocate meant to host your physical server on-premises with your company or in an existing data center with other physical servers you also operate.)

In my experience, even by 2008 it was normal for non-tech employees to work from home (or “WFH,” as it’s abbreviated) at least one day a week. Jobs were routinely advertised or negotiated with a certain number of ‘WFH’ days. Engineers worked from home even more, many working remotely all of the time.

Then around 2013 some of the larger tech companies started to crack down on their employees working remotely. Marissa Mayer, the new head of Yahoo, famously set a no-work-from-home policy, vexing many of the Yahoo employees with families who felt they had been hired with a promise of workplace flexibility.

Years before this, a manager told me that they had no shortage of mediocre people who would show up onsite and work 9-5, but virtually no candidates who would work a regular 9-5 job and were exceptional. The exceptional candidates, she explained, all had demands—workplace flexibility chief among them.

My own experience has borne this out too: The best developers I ever worked with were ones who worked remotely.

In fact, two that come to mind worked for companies in New York City — where I was an ‘onsite’ employee on the team — and did all of their work entirely remotely. While both of these people came to New York to visit — maybe once every six months — neither wanted to move to a city that was more expensive than where they already had a good living.

Both of these guys were rockstars, and I mean rockstars in the truly respect-worthy sense. You’d have meetings with them and by the next day they’d already have whipped up some sample code. They outperformed every member of the large team consistently, week after week.

Except one time. Stan (one of these exceptional developers) and Peter (the product owner), and I built a system. It was a management workflow for an internal agency. In the agency, the projects had a natural step-by-step flow from concept to ideation to wire framing to production to delivery. Sounds straightforward enough.

Peter the product owner had written a complicated state diagram showing that while the normal sequence of steps was 1, 2, 3, 4, 5, sometimes a project could go from step 5 back to 2, or from step 3 back to 2. His diagram had a number of complex transitions based on what the stakeholders had told him upfront about their needs.

So Peter shows this to Stan (who happened to be my boss at the time), who immediately solves it using every developer’s favorite pattern: a state machine.

The state machine implementation was beautiful. It had gated locking mechanisms that allowed transitions only through the specified steps. We deployed it; everyone loved it. It was the first time the agency had had a custom workflow built around its needs.
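For the curious, here’s a minimal sketch of what that gated locking might have looked like. The real system was built in Ruby on Rails; the class name and the exact transition table here are hypothetical, reconstructed in Python purely for illustration:

```python
# Hypothetical sketch of the gated workflow state machine. Only the
# transitions listed in ALLOWED may occur; anything else raises an
# error -- this is the "locking" behavior that later frustrated users.
class ProjectWorkflow:
    # The agency's five steps, in their normal order.
    STATES = ["concept", "ideation", "wireframe", "production", "delivery"]

    # Allowed transitions per the product owner's diagram: normally
    # 1 -> 2 -> 3 -> 4 -> 5, plus the exceptions 5 -> 2 and 3 -> 2.
    ALLOWED = {
        "concept": {"ideation"},
        "ideation": {"wireframe"},
        "wireframe": {"production", "ideation"},
        "production": {"delivery"},
        "delivery": {"ideation"},
    }

    def __init__(self):
        self.state = "concept"

    def transition_to(self, next_state):
        if next_state not in self.ALLOWED[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {next_state}")
        self.state = next_state
```

The fix we eventually shipped amounted to deleting the allowed-transition check while keeping the state field itself, so a project could move from any state to any other.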

“This thing is great,” someone tells me as feedback. “But why can’t I go from step 4 to step 2? Why must I advance to step 5 before going back to 2?”

Peter, Stan, and I have a meeting. “They love this product, but we don’t need all these rules blocking people from just putting the project into the state they want it to be in.” Dammit, Stan thinks, then why did you draw me a fancy chart saying that’s exactly what you needed up-front?

Whether this was a failure of product development or of imagination I can’t say, but it always struck me as funny that here I was, far junior to Stan in Ruby on Rails experience, and I had an inkling that the fancy state machine he built wasn’t going to be as useful as he thought. (Of course, he was my boss, and obviously it was what ‘they’ had asked for, but I still pictured the agency’s chaotic workflow and thought to myself: they’re going to want to change the states back and forth and back and forth, so why are we building a state machine that locks them into specific transitions? I didn’t bring up my objections to Stan at the time.)

Not only had he overbuilt something, but he’d in fact built something that was a hindrance to the end user (exactly what you don’t want in good software). In our case, our user base was small and forgiving (only the people in the company), so the impact was minimal. We removed the transition locks (but kept the underlying ‘state machine’ coding pattern) so that a project could be moved from any state to any other. All things considered, no creatives were harmed in the making of the software, and the cost of being wrong was low.

But building that state machine wasn’t a small bit of coding. Not months of wasted work, but it probably was weeks of wasted work.

It always struck me that Stan, a bit of a left-brained nerd, seemed to relate to the code better than he could relate to the people.

And therein lies the heart of the tension between the ‘onsite’ world and the ‘remote work’ world: Are you relating on an empathic level to the company’s needs, or are you relating on a technical level?

I’ve heard all sides of the onsite vs. remote-worker divide. Since it’s such a controversial topic, I’ll list just some of what I’ve heard about the benefits of letting your workers work remotely: 

1) As a company, you can have access to a much more significant pool of talent

2) Your workforce is more likely to work harder

3) You can distribute workers across time zones to allow for asynchronous work— for example, an engineer codes a feature in the afternoon and someone else does QA on that feature in the evening. (In a traditional onsite workplace, your features won’t get QAed until the following day when people come back into work.)

4) Workers can live somewhere significantly cheaper than the expensive tech hubs of San Francisco or New York City. While many people think the cost-of-living difference might be a factor of 50%, I estimate the true difference, after taxes, between an expensive city like New York and the Midwest is probably 400% (that is, it’s 4 times more expensive to live a “city lifestyle” in an expensive city than it is to live a “normal lifestyle” anywhere else).

5) Offices are distracting and not conducive to concentration. In their book Remote, Jason Fried and David Heinemeier Hansson write:

If you ask people where they go when they really need to get work done, very few will respond ‘the office.’ If they do say the office, they’ll include a qualifier such as ‘super early in the morning before anyone gets in,’ ‘I stay late at night after everyone’s left,’ or ‘I sneak in on the weekend.’

I’ve seen both sides of this coin. On the onsite side, I’ve done onsite pairing, product development, and whiteboarding; led and participated in onsite meetings; mentored junior developers face-to-face; and also been the coder with headphones on at his desk. (If you don’t know: when an engineer wears headphones, think of it as a “don’t talk to me right now” sign.)

I worked remotely for years as a consultant, and I “worked from home” on and off through many jobs in New York City too.

I am ambivalent about the debate: on the one hand, I have found onsite work to be stimulating, connecting, and effective, especially when you have a strong team (one that makes you want to come in every day). I’ve also found onsite work to be draining, exhausting, and, worst of all, distracting. How can both of these things be true?

To be effective at what we do, developers need something we call ‘flow.’ For laymen: think about the last time you wrote something difficult— maybe a paper for school or a presentation that required a lot of writing. Let’s say you sit down and start writing. At first, your attention is easily diverted: what you’re going to have for lunch, what your wife said to you yesterday, the sound of the ticking clock. You know you aren’t concentrating, so you concentrate harder. After about 15–20 minutes, sentences seem to be flowing out of you. Then someone comes in the door and asks you a question. You look up: “Sorry, what did you say?” Half of your brain is still writing, but it was just jarred into something else by the interruption.

Programming is like this every single day, and the need for focus is even more dramatic than in most professions. Non-programmers typically underestimate the significance of flow in software development.

Product development, which is a constant back-and-forth with stakeholders, benefits dramatically from people being in the same place. If you sit with the people you are building software for, you are more likely to connect with them on an empathic level, a key component of being a great product owner. As the state machine example with Stan demonstrates, sometimes if the only things you interact with are computers, you begin to identify more with the computers than with the people. (That’s not a good thing.)

Self-starting remote employees like Stan and Thomas (and yours truly, often)— partly because they have the privilege of working remotely— are focused on results, precisely because nobody is looking over their shoulders or clocking their hours.

The Coronavirus is a paradigm-shifting event in the WFH vs. onsite worker debate: everybody has to stay home during the quarantine. Companies and teams that have never worked remotely are suddenly forced into this situation. I can only imagine the CEOs and Human Resources people who have looked down on remote workers judgmentally— which, sadly, I fear is the majority— are going to lose their shit.


In the next post I’ll explore onsite work: how it has changed, where it shines, and what its pitfalls are.

Be sure to LIKE & FOLLOW for Part 2.

Tech Industry

Working from WeWork During a Pandemic

I sit here the morning after the World Health Organization declared a global pandemic due to COVID-19, and all anyone can seem to think or talk about is how scary this virus is. Nearly universally, as universities and events were canceled this week, workers across America began to work from home.

Zoom, the global video-conferencing platform and arguably the company that defined the industry, is up on the stock market today, a sign of the times.


Everywhere I turn, it seems people are freaking out: scared to ride the subway, cutting back on travel, stocking up on supplies and food, and talking about working from home.

In a cafe that I went to yesterday — normally a quiet haven of distraction-free concentration — a woman sat with her laptop on video conference. “Is this working? Can you hear me? We’re all video conferencing, then!” she said in her slightly British accent. Amateur, I thought, as I put on my headphones so as to maintain some semblance of separation between me and the talk of the pandemic.

Although I couldn’t help but be struck by one obvious, paradoxical fact: the cafe I was sitting in posed no less of a risk than a traditional workplace. If anything, with that many people touching tables and counters, there isn’t any reason to think the virus won’t spread just as easily if we all work from public cafés.

Here at my WeWork office earlier this week, people were talking about adjusting to this ‘new normal.’

“I heard they aren’t testing here in New York,” I overhear someone say. “I heard they are but they don’t have enough test kits,” someone else responds. Mostly, it’s the fear of not knowing that seems to be the driving force of stress.

The small coffee shop where I get my coffee (not the café mentioned above) had a sign today that read: “Out of an abundance of caution due to COVID-19, we are not doing ‘bring your own’ container or using ceramic mugs for the time being. Thank you for understanding.” Of course, I think, the customer touches the reusable to-go cup (like those metal mugs people use to reduce waste), the barista touches the container, the virus spreads to the barista’s hands, they give it back to the customer and then go on to spread the virus to the next customer’s order, and the next.

And so the small modifications to our lifestyle and adjustment to this ‘new normal’ begin.

As I sit and write this from WeWork DUMBO (in Brooklyn) it strikes me how ironic it is that in fact, working from WeWork is probably rather safe right now. Why? Well, for one thing, there’s nobody here. Office after office has shut down, people are working from home. This place feels like a ghost town.

Furthermore, people who work at WeWork tend to be adult professionals who act in a highly professional way:

  1. Of course, everybody washes their hands every time they come in or go out.
  2. I get to walk here, which helps me avoid the germ-infested subway system of New York City.
  3. The entryways have contactless scanners, so I don’t even need to touch my card to the turnstiles (I can just wave it over).
  4. I touch only the elevator buttons and can even do this with my elbow.
  5. I wash my hands immediately upon walking in.
  6. Every surface is wiped down, now (as I’m told by general announcement) multiple times a day.

Hospitals used to be places where it was presumed you would go to die, as people generally went into hospitals and didn’t come out. Then doctors and nurses started washing their hands, and people started coming home from the hospital.

(In 1846, Ignaz Semmelweis, a Hungarian doctor, noticed something interesting about two maternity wards: in the ward run by students and doctors, women giving birth were more likely to develop a fever and die than the women in the ward right next door, which was run by midwives. He noticed that the doctors and students often visited expectant mothers right after performing autopsies. As a result, Semmelweis mandated handwashing with a chlorine solution for doctors. Suddenly, new mothers stopped dying at the rates seen before.)

Today, doctors and nurses who work in hospitals will tell you that they wash their hands not to protect themselves from disease, but (largely) to protect their patients. The biggest risk is not from a doctor or nurse who is infected, but from passing germs from patient-to-patient as they make their “rounds.”

That’s why the medical profession has always said (and keeps saying): wash your hands!

The truth is, there’s a lot about COVID-19 that we still just don’t know. Panic is a manifestation of the threat to our egos. It’s a normal reaction, but the stress it causes is useless. Whether you are working from home, a café, or an abandoned (and probably antiseptic) WeWork, there’s nothing to support the idea that merely changing your work location puts you at any less risk. Washing your hands, not touching things unnecessarily, avoiding travel, and keeping a strong immune system are probably the most effective things to put your energy into.

Honestly, for myself, I think the WeWork is the safest bet today.

Tech Industry

Queens JS (4 Apr 2020)

A photograph against the backdrop of coronavirus panic throughout the city.

Angus Grieve-Smith (@grvsmth) treated us to a demonstration of how to build audio interfaces using native web controls, appropriately titled “Web Audio With Javascript.”

Angus shows how to use native Javascript to build an audio recorder.

Peter Karp talked about Pydantic, a Python validation tool. He compared it to marshmallow, an older tool used for serialization, validation, and typing in JSON manipulation.
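I didn’t capture Peter’s exact examples, but the core idea behind Pydantic-style validation is declaring field types once and having bad input rejected at construction time. Since Pydantic itself is a third-party library, here’s a stdlib-only sketch (with a hypothetical `User` model) that mimics that declarative style:

```python
from dataclasses import dataclass, fields

# A hand-rolled imitation of declarative validation: each field's type
# is declared once, and __post_init__ rejects values of the wrong type.
# (Pydantic's real BaseModel goes further: it coerces compatible types
# and reports all validation errors at once.)
@dataclass
class User:
    name: str
    age: int

    def __post_init__(self):
        for f in fields(self):
            value = getattr(self, f.name)
            if not isinstance(value, f.type):
                raise TypeError(
                    f"{f.name} must be {f.type.__name__}, "
                    f"got {type(value).__name__}"
                )
```

With this sketch, `User(name="Ann", age=30)` constructs fine, while `User(name="Ann", age="thirty")` raises a `TypeError`.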

Peter Karp talks Pydantic

And finally, Tracy Hinds (@HackyGoLucky) talked about conflict resolution on software development teams.

Tech Industry

The Interface IS the Experience: Why the iPad Changes the Game

In 1984 the personal computer industry was forever changed by the first Mac. More expensive and less familiar than the DOS-based computers that were gaining popularity, the Mac was a first: it shipped with a point-and-click mouse as standard, and its core operating system – the thing we used to tell the computer what to do – presented a flat, desk-like surface. Once we got used to the idea, we could move the pointer around with the mouse and drag “icons” representing ideas from one place on our virtual desk to another.

The nomenclature would take a few years to solidify: the virtual desk would be known as the desktop, the things we move are “files” and like in an office we would put files into folders and even “trash” them when we no longer wanted to keep them around. The virtual concept was borrowed from the real world, and it allowed us to relate to our computers in a way that was both familiar and more intuitive than typing esoteric commands into a terminal window.

Within years the idea of a point-and-click interface would become standard for the (Microsoft) PC world too.

What made the critical difference was the interface that we used to get what we wanted out of the machine. The first Macs could pretty much do the same things that their predecessors could, but you didn’t have to relate to the machine in a cold, distancing syntax that involved learning a series of special things to type. Admittedly, the new point-and-click paradigm did train us, that is – we had to learn its specific idiosyncrasies – but it met us halfway by making the concepts simpler and the interface more intuitive.

Computers by their nature are impersonal. They expect specific instructions to accomplish specific tasks. In the early days, computer programmers were nerdy, anti-social specialists who worked in a world based largely on higher math. Most software in the early days – and most software that the general population doesn’t see today – crunches numbers and performs calculations at levels of higher math most laymen would have no understanding of.

The mathematicians who became computer scientists were classically not very good at understanding people. Our tendency (in the early days), was to make software that works more like the way the computers think than the way that humans think.

Using a modern operating system today, and particularly Mac OS X, is much easier. First, the computer shows us a graphic representation of what’s available right away – without us having to ask for it first. The Mac OS is organized to have default places for our documents, photos, and music. Let’s face it, who cares where it is on the hard drive? Well, the operating system does. The file/folder hierarchy is an arbitrary (albeit necessary) way to organize ideas and things on our computer whose primary function is to meet the needs of the computer, not the user. This has led to the classic story of users saving things some place and not knowing how to find them later.

To make things easier, Apple pioneered the idea that software could be written to address the user directly. You can search your hard drive, for example, by entering just a few letters or words that might be contained within the document you are looking for. This search can be “canned,” so that you can view all things relating to your daughter’s school in a “smart” folder – a folder that doesn’t really exist but is a representation of all the relevant files across the drive.

This is just one example of the paradigm shift towards what Alan Cooper calls in his 1999 book “human interaction design.”1 On top of the nuts and bolts that make the computer work are layers and layers of software designed to give a unique, albeit somewhat artificial, experience to the user.

Yesterday Apple announced the iPad – a sleek, 10-inch version of the iPhone whose only input is the sophisticated touch screen surface. Is this the latest in an over-saturated world of gadgets, gizmos, toys?

Sure it is the latest tech-bling, but taking a step back most observers would argue that it represents something larger: a paradigm shift in how we interact with our world. The iPhone has become a staple – a near ubiquitous characteristic of our modern world.

Jobs explained yesterday that they have long asked themselves whether there is room for a tablet in the marketplace – something bigger than a smart phone but not quite the same as a laptop either. “If there’s going to be a third category between smart phones and laptops, it is going to have to be far better at some key things – otherwise it has no reason for being.”2

What makes the iPhone – and now the iPad – a game changer are three key things: (1) there is no mouse or keyboard, (2) the device is portable, and (3) software can be written for it after it is sold and in the wild.

The lack of a mouse and keyboard matters because it, like the point-and-click paradigm when it was first introduced, changes how we think about interacting with a computer. The iPad’s fundamental change is that the user interface is entirely based on using fingers to make selections on the screen. The interface designers know this, and the interface takes it into account: obscure tasks aren’t hidden away in deep menus and submenus; there are just a few options presented on any given screen; and higher-level choices let us navigate to places where we can make more specific choices (like a phone tree). All of these facets are native to the iPhone/iPad paradigm and represent a key shift from the desktop computer model that has been predominant for 30 years.

Second, portability. Computer use is no longer a matter of sitting at our desk (or an internet cafe) and staring at the glowing box (as a friend of mine likes to say). I can be walking down the street, want to find the nearest place to get lunch, pull out my iPhone and launch the AroundMe app, and see a visual map with push-pins showing where I am and where to eat.

Of course it is possible to do this on a computer too, but it is unlikely that I would have gone to the trouble. Or if I had, I would have had to do it in the morning before I left my house. The immediacy of the iPhone not only makes it useful on-the-go, it means we will think about being on-the-go in new ways. Most people with iPhones just don’t use Mapquest or Google Maps on their computers anymore, because as long as the network coverage is good, we know we have with us, at any given time, a real map showing us how to get where we are going.

Portability affects everything. If we’re waiting for an important email, we can go to our kid’s little league practice knowing that the device will let us look for and respond to that email should it arrive.

Finally, the third key difference is that software can be written after the fact. Apple knew that the market and users themselves would drive the way the iPhone was used. The same thinking has gone into the iPad.

Apple doesn’t have to think of every way it could be used; all they have to do is make the best hardware they can. The uses will come, and because you can load new applications onto the device like a computer, the possibilities are endless.

The iPhone has already changed the way a lot of us think about being connected. What will the iPad do? The simple answer: take us a lot farther. The interface is the experience. A bigger screen will mean more possibilities and more space to lay out information, and that will lead to new and interesting uses of that information.

1. The Inmates Are Running the Asylum by Alan Cooper. Sams Publishing, 1999
2. Steve Jobs, Apple Special Event Jan 27, 2010