As every product team knows, designing a good product process is hard. As the team grows and business strategies change, the dynamics of a product team change, and processes must evolve to fit the new normal. At Thumbtack, we review our process together as a team once every quarter and deliberate on how the process should evolve. It’s a day-long activity that has resulted in significant shifts in process over the years -- from focus teams, to 2 week sprints, to task-based systems, to teams. And thus far, it, along with great home-cooked food, has been critical in ensuring a growing team of happy engineers, designers, and product managers.
What is the process optimizing for?
We treat our product process like a product and iterate on it in order to find the process that best meets its objectives within a set of constraints. We’ve found that the product process has two major constraints: the size of the team, and the nature of the business priorities at the time -- how many there are, their timelines, and whether they are more execution-focused or experimental. In addition to satisfying these two constraints, we believed the product process should:
- Foster individual ownership and promote excitement about work
- Be effective against meeting company goals
- Disseminate knowledge and context effectively
- Support day-to-day needs of the company (e.g., emergency problems, engineering requests from other functions)
- Maximize time spent working toward goals and minimize overhead
- Ensure high quality software and design across the product
- Scale as the team grows (This came later).
How does the product process get changed?
As a product team, we hold a Product Process Review meeting (PPR) at a regular interval to discuss the process, and any changes to the process agreed upon during the meeting are carried out until the next PPR. The inaugural meeting grew out of a disagreement within the company early on about what the product process should be. But since then, it has been invaluable for improving the process and maintaining morale.
The review meeting is held at regular intervals (every 6 weeks and, later on, every quarter). The entire product team (engineers, designers, product managers) meets and often dedicates an entire day to the discussions and decisions. First, we reflect together on what went well, and what could be better. Each person is expected to offer an opinion and all comments are written down. Then, we group the #couldbebetters into main topics, and break up into cross-functional teams to discuss potential solutions. These discussions usually take an hour or two, and may go for much longer if there’s heated debate. Finally, we regroup as a team for each topic, discuss different solutions, do a majority vote, and make a decision. If a particular decision is a close call, then we’ll flag it to be revisited explicitly in the next PPR. This entire process is documented and sent to the entire company to keep everyone else up to date.
The early years: waterfall
In the very early years of the company, we had a waterfall process between engineering and design. Most design-heavy projects went from UX wireframing to visual design to markup to an engineer 6 weeks later. There was no formal design review process, and projects that involved design were done by engineers with minimal involvement from design. The engineer would feel no ownership of the product, and find many issues with building it effectively. This bred discontent. Startups galore have realized that waterfall does not work, particularly at this early stage of a company, when the focus was on quick experimentation and ownership. We scrapped that process very quickly, and I will not dwell upon the well-known reasons why waterfall doesn’t work here.
Focus and iterate: focus teams
Focus teams formed as a way to quickly experiment and iterate around certain objectives. As the name implies, they optimized for focus. Thumbtack was still in its youth and needed to find product-market fit. We had 8 engineers and 2 designers, and each team was responsible for one company initiative critical to the company's survival: "find a monetization strategy", "increase request volume", or "get customers to hire".
What worked well
This structure was very effective for focus, coordination around a particular goal, and getting things done. Many, many experiments were run, and Thumbtack iterated into a model that led to our current growth trajectory as a company. Everyone felt great ownership over what they were working on and was involved from ideation to implementation.
What didn't work
- Nothing outside the focus of the focus teams got done.
- The objectives were so massive that these teams went on and on, sometimes for more than a year.
- The engineers overwhelmingly felt "stuck" and that they had to develop such deep expertise in an area that they could not move to another product area.
- Setting objectives was ambiguous. The process of determining what the focus teams worked on felt top-down, and it stifled ideas from the rest of the company.
One team, two week sprints
After finding product-market fit, the company’s priorities shifted to growing the current model. The product team decided it was a good time to move out of focus teams and to a process that gave engineers more fluidity between projects and gave anyone in the company the autonomy to propose new projects. The outcome was 2 week sprints.
- Each sprint started on a Tuesday (so engineers didn't have to rush to push code Friday evenings). There was a retrospective in the morning where everyone would state what they intended to accomplish, and what was accomplished (this added accountability). Then everyone would either continue projects or pick up new projects, and kick-off meetings were held Tuesday afternoon.
- We tracked progress in Trello cards. There were 4 columns: Prioritized, In Progress, Not Doing, and Finished (with the week it was finished in), and each person would put his/her face on a card when it was picked up. This was a great way to share who was working on what. There were separate Trello boards to track bugs, small polishes, and project ideas / pipeline. A bug would only be picked up if it was urgent (and we defined urgent as blocking the core requesting or quoting process); otherwise it was simply tracked. We then did bug and polish weeks with the entire team every quarter.
- PMs or anyone else on the team could write project proposals for features or infrastructure-related work. These proposals covered the goal, why it’s important, details, resources, and alternatives.
- PMs, along with 1 engineer and 1 designer, met weekly to discuss and prioritize projects. In the in-between weeks, the meetings were more for looking at higher-level data, discussing upcoming features, and triaging bugs and emergencies. Anyone was welcome to join the prioritization meetings (or any meeting, really), and was especially encouraged to if they had a project they wanted to work on. Notes from meetings were sent to a general mailing list everyone was encouraged to subscribe to.
What worked well
- There was clear visibility and accountability of who was working on what. The entire team would meet at the Tuesday retrospectives to collectively discuss progress and learnings from their projects.
- The process mandated substantive project proposals with specific details on why and how a feature would be built. This meant in-depth thinking went into each proposal.
- It was flexible and the team was able to shift priorities much more quickly
- Lightweight with few overhead meetings during sprints
- Team members were no longer "stuck" on a project. Instead, they had the option to move to a different project every 2 weeks.
What didn’t work
- There was not enough time to think about the next cycle's work or to estimate the complexity and time properly. Engineers began to feel that, because they didn't have time to think about the project proposals, they simply "got put on" new projects.
- Design doesn't work well within 2 week boundaries, nor does it sync well with project timelines.
- Quick deadlines forced a focus on getting things out rather than on quality. In addition, prioritizing infrastructure projects alongside product projects was difficult, and given there was not an infrastructure-focused team at the time, eng infrastructure suffered (and we paid for it many times over later).
- With everyone working across the product and the opportunity to switch every two weeks, sharing the context and knowledge needed for a particular project got much harder. In addition, given desires to move on, projects were often left unfinished or unpolished. There was then no process to continue them.
Onwards towards tasks!
After trying out the 2 week sprints for 4 months, we felt that they were too long for certain projects, and too short for others. There were lots of "several day" projects that were awkwardly tossed around and then informally done. And so, the team decided to try a task-based system to accommodate these smaller tasks.
- Projects were broken down into small tasks estimated to take one day each. Prioritization of tasks happened on a weekly basis, based on historical velocity, by a PM and a lead eng, but work was not broken into cycles and progressed continuously.
- Engineers were expected to pick up cards as they finished previous ones, and could “claim” a set of cards if they belonged to one project.
- We continued using Trello to track these tasks as Prioritized, In Progress, Not Doing, or Finished. We also had tags for cards/tasks that were added mid-week for any reason. Bug and polish processes were kept the same.
What worked well
- Accommodated smaller tasks being done
- Forced scoping of projects down to small tasks, which forced more detailed planning and time estimation upfront. There were often cards added just for scoping.
- We realized that many tasks (as many as 50% of all cards) come up mid-week. We didn’t fully appreciate this before this process.
What didn’t work
- No ownership or accountability at a project level for engineers, which resulted in us not being able to effectively hit company goals
- In a world with no project proposals or ownership at a project level, there was very little context on why a task needed to be done, and very little motivation to get it done.
- Projects were broken up, and one part (likely the easiest or most interesting part) would be implemented but others would not.
- Time estimations were rarely accurate, and hence while there was an expectation that we finish what was prioritized the previous week, this rarely happened in practice.
- Design was very difficult within this process; essentially, it did not work at all.
Breaking into teams
The task-based system got messy quickly, and it was our shortest-lived process after waterfall. Meanwhile, the company raised another round of funding and was poised to grow dramatically in the following months. It was clear that our 20 person product team would soon be (if not already) too big to operate as a single team. It was increasingly difficult to share knowledge, hold each other accountable, or simply fit into one meeting room. And so, we moved back to the one scalable solution: product teams.
- Product teams were given a mandate for one quarter and were in sync with the company’s quarterly planning process. Team objectives were set by the company leadership with heavy input from the teams and staffed accordingly. In Thumbtack’s case, we had Infrastructure, Mobile, and 2 product-focused teams.
- Engineers gave preferences for which team they’d like to be on, and the engineering managers got together to staff the teams based on preference, need, skillset, and seniority. (We first tried, but scrapped, a pure lottery system where engineers were randomly assigned numbers and teams were filled based on 1st preferences in order of those numbers.) Engineers were given tokens if they did not get a 1st choice, so they had priority the next quarter.
- Engineers were expected to switch teams each quarter to avoid being stuck on a team for too long.
- Each team was given the freedom to choose tools, metrics, and processes as it saw fit.
What worked well
- There was strong accountability and ownership within teams. However, some teams had members constantly switching in and out (interns, new hires), and these changes clearly shifted both the dynamics of the teams and the level of ownership.
- Teams were empowered to adapt and change priorities as needed
- New engineers had a smaller group to work within and bond with.
- Pairing happened more than ever with teams, both within and across teams
- Designers finally fit into this process well and could work alongside engineers and PMs as a part of a coherent team.
What didn't work
- Cross-team sharing and syncing did not happen as often as needed, since each team was on a different tracking tool. The team leads also did not sync as formally as needed.
- Scoping upfront was important to the motivation within teams. Teams that were too broadly scoped lacked accountability and focus. Teams that were under-staffed were unable to meet their goals no matter how hard they worked.
- The things that fell outside of the teams’ scopes were hard to pick up.
In general, teams worked well for the growing size. We made some small improvements after one quarter: we focused on scoping the teams better and explicitly assigned team leads for each team. We also learned that cross-team communication does not happen naturally, so we put meetings in place for team leads to sync, set aside time during team meetings to share among the team, and got everyone on the same tracking tool.
Overarching themes across product processes
Across these four iterations, we learned much about what works and what doesn’t work for us. There is no one-size-fits-all solution, and the best process depends on the size of the team and the business priorities at the time. Here, however, are some things we found consistently worked and did not work:
What we love and know works
- Aztec process for dealing with smaller day-to-day needs of other parts of the org: one engineer would be designated the Aztec for a week and everyone would send their requests to a dedicated email that the Aztec monitored. If it was a small task, the Aztec would just do it. If it was a larger request, it would be tracked and picked up when appropriate (e.g. next quarter, during bug or polish week, etc.).
- Having a dedicated bug squashing + polish week every quarter where we as a team get through as many small changes as possible to polish and clean up the existing product. These are typically small 1/2 day to 1 day tasks, and can be clearly laid out on Trello cards.
- Design and engineering building and iterating together: any process that promoted engineers and designers to work more closely together resulted in better product.
- Having a product process review process to change process as needed: critical to a growing team!
What we haven’t figured out
- Time estimations are inaccurate, as complexity is hard to understand prior to starting, and we consistently under-estimate how much time recruiting and small interruptions from the rest of the company take.
- Lack of a formal QA process. Quality relied on engineers and PMs (and often CS) running manual tests and building unit tests. As the company scaled, this got harder, and more bugs started being introduced.
- No good way to store all the knowledge across teams and keep the rest of the company up to date. There is a constant battle over sharing and disseminating context that gets exponentially harder as the company grows. Thumbtack to date hasn’t tried set release dates or formal release notes, but perhaps in the near future we’ll be at a size where that is valuable.
That is where Thumbtack is in its current product process. As we grow our product team, we’ll have much more to learn about how teams are best run and how they stay in sync across the company. Is there a different process for infrastructure teams vs product teams? Will we have different processes for building known features (e.g. reaching parity with web on Android) versus experimenting with new features? How can we stay better in sync with the rest of the business and its needs? What channels of communication will we have?
Go is a nice minimal language that's easy to pick up and start using. Unfortunately one question that Go doesn't have a good answer for is package management. The official take on the subject is to vendor 3rd party dependencies, but the list of tools that could help with the process is simply overwhelming.
Fortunately, there's a straightforward solution that doesn't even require additional tooling (well, almost).
The go tool uses the $GOPATH environment variable to search for packages referenced in import statements. Just like its cousin $PATH, $GOPATH can take multiple locations, and the tool goes through the list until the package is found.
Having a single directory in $GOPATH is officially encouraged. However, prepending it with a project-specific directory holding its dependencies is a convenient way to let the go tool know where to look for them. You just need to update the environment when you switch projects.
Doing this manually every time is too much work, but go-vendor is here to help! All you need to do is copy the vendor script to the root of your project. Then, before you start working on your project:

% source vendor

And after you are finished:

% devendor

The first command updates $GOPATH and $PATH with the project's vendor directories; devendor resets those paths to their original values.
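For concreteness, here is a minimal sketch of what such a script might do. This is an illustration of the mechanism, not the actual go-vendor script (which may differ), and the `_OLD_*` variable names are made up:

```shell
# Minimal sketch of a project-local "vendor" script (illustrative only;
# the real go-vendor script on GitHub may differ). Source it from the
# project root before working on the project.
export _OLD_GOPATH="$GOPATH"               # remember the original values
export _OLD_PATH="$PATH"
export GOPATH="$(pwd)/vendor:$GOPATH"      # search project-local deps first
export PATH="$(pwd)/vendor/bin:$PATH"      # pick up vendored binaries

# devendor restores the environment when you are done with the project.
devendor() {
    export GOPATH="$_OLD_GOPATH"
    export PATH="$_OLD_PATH"
    unset _OLD_GOPATH _OLD_PATH
    unset -f devendor
}
```

Because the script only prepends to $GOPATH, the go tool still falls back to your usual workspace for anything not vendored.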
Pros and cons
There are several advantages to this approach:
- You don't need to depend on 3rd party tools.
- You can continue to use the go tool as you would before.
- Most importantly, there is no magic whatsoever; you actually know what is happening under the hood.
The only downside is that you need to include the vendor script in your repo. The script is small and easy to understand, so arguably it's well worth the convenience it brings. It's also very unlikely to change.
To vendor a 3rd party library into your project, you simply go get it, strip its version control metadata (.git, .bzr, ...), and commit it to the project's repository. You can then use the go tool as you would before; no need to prefix or replace it with anything else. Just source vendor once to make vendorized dependencies available.
go-vendor is available on GitHub under the MIT/X11 license.
From the July 28, 2014 SoMa Tech Talk series.
Abstract: As anyone with A/B testing experience can tell you, the humble A/B test is loaded with complexity and pitfalls. Seemingly basic questions of experimental design and analysis are surprisingly difficult to get a handle on, even for those with a background in statistics. How long should I run my test? Which calculator should I use? What confidence level is appropriate for me? In this talk, I'll discuss my attempts to use Monte Carlo simulation to put these questions into a very practical context: how do various choices affect your ability to achieve a higher conversion rate when all is said and done? I'll sprinkle in some interesting statistics and engineering tips along the way.
From the July 28, 2014 SoMa Tech Talk series.
Abstract: Evan Miller will be speaking about visually appealing ways to "supercharge" traditional descriptive visualizations with inferential statistics. Evan is the author of the popular Wizard statistics application for Mac.
"I call our world Flatland, not because we call it so, but to make its nature clearer to you, my happy readers, who are privileged to live in Space."
(Edwin A. Abbott, Flatland)
A linear commit history is a fine, beautiful thing. It keeps developers sane. It keeps beastly merge commits at bay. It removes pollution from the history. It enables faster debugging. And, like any useful tool, the linear history is a useful mental construct for thinking about code and changes to that code.
A linear commit history relies on a powerful git mechanism called the rebase. You might have heard fairy tales of how rebasing is dangerous or how it can corrupt your history. Yes. Like most powerful tools, in the hands of a novice, the rebase could be problematic. However, like any powerful tool, a master craftsman can wield it carefully and precisely to achieve great ends.
This post is targeted at new developers (or non-engineers) who are looking to understand a git workflow in the context of drawing a straight line: a linear commit history.
Components of the line
You can think about the code as a series of commit objects, organized linearly, which stack on top of one another. If you were to start the codebase over and apply each commit sequentially, you'd end up with exactly the same codebase we have now. Each commit has a parent commit. This is useful because you can think of each commit as just being a difference between two states: the state of the codebase at the parent commit versus the state of the codebase at the commit itself.

This difference has a more formal name in git: a diff. The concept of a diff is widely used, and can refer to any difference between file(s) in one state and the same files in a different state. Git even has special commands for viewing differences: git diff and its variations, but we won't go into those at the moment.
Each commit is given a unique hash code, which is a bunch of letters and numbers, like 83ddf3be77b58395d2f00b7f51a7cec8bafd2ac8. Because these codes are so unique, you can often refer to a commit by just the first part of the code, like 83ddf3b.
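You can see both forms yourself. This throwaway-repository sketch (the repo name and commit message are made up) shows that the short prefix resolves to the same commit as the full hash:

```shell
# Throwaway repo demonstrating abbreviated commit hashes
# (repo name and commit message are made up).
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "first commit"
full=$(git rev-parse HEAD)            # the full 40-character hash
short=$(git rev-parse --short HEAD)   # the short unique prefix
git show -s --oneline "$short"        # the prefix resolves to the same commit
```

git picks a prefix length long enough to be unambiguous in your repository, so the short form stays safe to use as the history grows.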
Linear commit history
In the following diagrams, each commit will be referred to with a one-letter code. Each machine references the commit history. You can think of production, staging, and the two versions of master as pointers into the history. production represents what our users are currently seeing in production. staging is the code about to be deployed. master is the code we are working on currently in development.
A <- B <- C <- D <- E <- F <- G
                    ^    ^    ^
                    |    |    |
                    |    |    master on remote
                    |    |    master on your machine
                    |    staging
                    production
Great! So we can see here that production is behind staging, which is what we expect. So if we were to deploy what's currently on staging to production, the diagram would now show both pointers at the same commit:
A <- B <- C <- D <- E <- F <- G
                         ^    ^
                         |    |
                         |    master on remote
                         |    master on your machine
                         staging
                         production
Now you make several new changes to the codebase on your local dev instance, and you commit them to master. Those commits have the codes H through L. The commit history now looks like this:
A <- B <- C <- D <- E <- F <- G <- H <- I <- J <- K <- L
                         ^    ^                        ^
                         |    |                        |
                         |    |                        master on your machine
                         |    master on remote
                         staging
                         production
Great. Now your local copy of master includes the changes you made, H through L. The next step is to issue a git push origin master to get these 5 commits into a central place where they can be accessed by other developers and by other systems.
A <- B <- C <- D <- E <- F <- G <- H <- I <- J <- K <- L
                         ^                             ^
                         |                             |
                         |                             master on your machine
                         |                             master on remote
                         staging
                         production
Now you want to update the staging environment to the latest version of master, so you'll go to your internal deployment tool and start the deployment process. The result will be that staging now points to the commit history at commit L, and the staging environment will showcase the newer version of the codebase.
A <- B <- C <- D <- E <- F <- G <- H <- I <- J <- K <- L
                         ^                             ^
                         |                             |
                         |                             master on your machine
                         |                             master on remote
                         |                             staging
                         production
If you want to also put those commits into production, you'd now issue another deployment that will move the production pointer to master:
A <- B <- C <- D <- E <- F <- G <- H <- I <- J <- K <- L
                                                       ^
                                                       |
                                                       master on your machine
                                                       master on remote
                                                       staging
                                                       production
The linear commit history described above works great when you're just working in master, but what happens when you want to make a big feature that has a multi-week development timeline? You don't want to make big features in master, because you'd prevent yourself from making any small bugfixes or tweaks in master, so you create a new branch. Think about the main commit history diagrammed above as the trunk of a tree; then it makes sense how you might branch off that trunk.
Let's say the new branch is going to be a new interface for the app that uses smoke signals to communicate with old-fashioned users, so we'll call the new branch smoke-signals and we'll create the branch with git checkout -b smoke-signals (do this while master is checked out, so your new branch will start at master). Detailed versions of this command (and its caveats) are below. We'll focus on the high level diagram here.
Working from the diagrams above, we'll hide the earlier part of the history to make it easier to read.
Your new branch smoke-signals will start at the same commit as master:
J <- K <- L
          ^
          |
          master on your machine
          master on remote
          staging
          production
          smoke-signals
That's exactly what we expected. You start working on smoke signals, and make your first commit that can show a "hello" signal. You commit this and it has commit-id M:
J <- K <- L <- M
          ^    ^
          |    |
          |    smoke-signals
          master on your machine
          master on remote
          staging
          production
You make a few more commits into the smoke-signals branch:
J <- K <- L <- M <- N <- O <- P
          ^                   ^
          |                   |
          |                   smoke-signals
          master on your machine
          master on remote
          staging
          production
Suddenly the support team emails you and notifies you of a high-priority bug that you need to fix. You switch branches from smoke-signals to master (with git checkout master), and get to work on the bugfix. (Note that to switch branches you should have a clean working tree, which means you need to either commit your work before switching or stash it.) When you commit the bugfix it is given a new unique ID, Q. Q is based off of master, so now the commit history has two very distinct branches, indicated in the diagram with +. Also note that master on your machine is now at Q:
                     smoke-signals
             / <- M <- N <- O <- P
J <- K <- L +
          ^ \ <- Q
          |      ^
          |      |
          |      master on your machine
          master on remote
          staging
          production
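The commit-or-stash rule for switching branches can be sketched end to end in a throwaway repository (the branch, file, and commit names here are made up for illustration):

```shell
# Throwaway repo sketching the commit-or-stash rule when switching branches
# (branch and file names are made up).
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev
git checkout -q -b master
git commit -q --allow-empty -m "initial"
git checkout -q -b smoke-signals
echo wip > signal.txt                  # uncommitted feature work
git stash -q -u                        # stash it (-u includes untracked files)
git checkout -q master
git commit -q --allow-empty -m "Q: urgent bugfix"
git checkout -q smoke-signals
git stash pop -q                       # pick the feature work back up
```

After the pop, the working tree on smoke-signals looks exactly as it did before the interruption, and the stash entry is gone.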
You want to get the bugfix Q out to production right away, so you first git push origin master to get your commit onto the remote server:
                     smoke-signals
             / <- M <- N <- O <- P
J <- K <- L +
          ^ \ <- Q
          |      ^
          |      |
          |      master on your machine
          |      master on remote
          staging
          production
Then you issue a deploy to update the staging and production pointers as well. After these steps, the diagram looks like this:
                     smoke-signals
             / <- M <- N <- O <- P
J <- K <- L +
            \ <- Q
                 ^
                 |
                 master on your machine
                 master on remote
                 staging
                 production
Awesome, your bugfix is in production and now you can go back to working on smoke signals. You git checkout smoke-signals to get back to your project, and write some more code to get smoke-signals ready for primetime. You issue a final commit and it has an ID of R:
                          smoke-signals
             / <- M <- N <- O <- P <- R
J <- K <- L +
            \ <- Q
                 ^
                 |
                 master on your machine
                 master on remote
                 staging
                 production
In the meantime, your colleague has been working on master and created another bugfix, S:
                          smoke-signals
             / <- M <- N <- O <- P <- R
J <- K <- L +
            \ <- Q <- S
                 ^    ^
                 |    |
                 |    master on remote
                 |    staging
                 |    production
                 master on your machine
This is starting to get complicated, so take a deep breath and look closely at the diagram. Your goal in the next three steps will be to get smoke-signals on production.
First: update your local copy of master to match what your colleague has, with git checkout master and git pull --rebase origin master. Note that you might also choose to run git fetch if git reports that your branch is ahead of origin/master by N commits (this is not actually a bug or an issue, so you can skip the fetch if you choose).
                          smoke-signals
             / <- M <- N <- O <- P <- R
J <- K <- L +
            \ <- Q <- S
                      ^
                      |
                      master on remote
                      master on your machine
                      staging
                      production
Second, and this is important, you will stack smoke-signals on top of master by first going to the branch with git checkout smoke-signals, then issuing a rebase with git rebase master. By doing this you essentially detach your branch from where it departed from master and rewire the diagram so your feature branch is now sitting on top of master:
$ git checkout master
$ git pull --rebase origin master
$ git checkout smoke-signals
$ git rebase master
# .. Resolve any conflicts followed with `git rebase --continue`
             / <- (nothing here!)
J <- K <- L +
            \ <- Q <- S <- M <- N <- O <- P <- R
                      ^                        ^
                      |                        |
                      |                        smoke-signals
                      master on remote
                      master on your machine
                      staging
                      production
During a rebase, you might have to resolve conflicts if any files changed in one branch were also changed in the other branch. This is totally normal, and you should carefully consider the conflicts to make sure they're resolved correctly. After you resolve all conflicted files, type git rebase --continue to continue the rebasing process.
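To see what that looks like in practice, here is a sketch that manufactures a conflict in a throwaway repository and resolves it (the branch, file, and commit names are made up):

```shell
# Throwaway repo that manufactures a rebase conflict and resolves it
# (file names and messages are made up).
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev
git checkout -q -b master
echo "hello" > greeting.txt && git add . && git commit -q -m "base"
git checkout -q -b feature
echo "hello, feature" > greeting.txt && git commit -q -am "feature edit"
git checkout -q master
echo "hello, master" > greeting.txt && git commit -q -am "master edit"
git checkout -q feature
git rebase master >/dev/null 2>&1 || true     # stops on the conflicting file
git status --short                            # lists greeting.txt as conflicted
echo "hello, master and feature" > greeting.txt   # resolve by editing the file
git add greeting.txt                          # mark the conflict as resolved
GIT_EDITOR=true git rebase --continue >/dev/null 2>&1
```

Once the rebase finishes, the history is a single line again: the feature commit sits on top of the master edit with the resolution you chose baked in.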
Now you'll want to move your master to be at R. This is the only time you should issue a merge command. Be sure to use the --ff-only flag:

$ git checkout master
$ git merge --ff-only smoke-signals
             / <- (nothing here!)
J <- K <- L +
            \ <- Q <- S <- M <- N <- O <- P <- R
                      ^                        ^
                      |                        |
                      |                        smoke-signals
                      |                        master on your machine
                      master on remote
                      staging
                      production
Now master is at R, and you can use the same deploy process described above to get your new smoke signals feature into production:
J <- K <- L <- Q <- S <- M <- N <- O <- P <- R
                                             ^
                                             |
                                             smoke-signals
                                             master on your machine
                                             master on remote
                                             staging
                                             production
Notice how you've now flattened the commit history into a single line again. Exactly what we wanted.
The linear commit history is a useful tool for managing a complex codebase. We have found it scales well with a growing codebase and engineering team, and recommend the linear history as a useful strategy for anyone considering ways of managing workflows.
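As a recap, the whole branch, rebase, and fast-forward merge workflow condenses to just a few commands. This throwaway-repository sketch (the branch, file, and commit names are made up) ends with a single straight line of commits and no merge commit:

```shell
# End-to-end sketch of the linear workflow in a throwaway repo
# (branch, file, and commit names are made up).
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev
git checkout -q -b master
echo L > l.txt && git add . && git commit -q -m "L: last shared commit"
git checkout -q -b smoke-signals
echo M > m.txt && git add . && git commit -q -m "M: feature work"
git checkout -q master
echo Q > q.txt && git add . && git commit -q -m "Q: hotfix on master"
git checkout -q smoke-signals
git rebase -q master                    # replay M on top of Q
git checkout -q master
git merge -q --ff-only smoke-signals    # fast-forward: no merge commit
git log --oneline                       # one straight line: M, Q, L
```

The --ff-only flag is the safety net: if the feature branch were not already stacked on top of master, the merge would refuse to run instead of quietly creating a merge commit.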