It’s been over 5 years since Marc Andreessen published his famous WSJ article, “Why Software Is Eating The World,” and in the years since, we’ve certainly seen the technology, venture and growth capital worlds evolve. While not every company or prediction mentioned proved successful, many did, and the prescient headline certainly has come true. Every day software appears to creep further into our world. Most consumers feel software’s impact daily, and technology’s impact continues to grow within the business world.
Driven by the proliferation of connectivity, cheap computing power both in our pockets and in the cloud, and a robust API/cloud services ecosystem, it continually becomes cheaper and faster to bring whole software products to market. As a result, we’ve seen software explode across geographies, industries and marketplaces. In the B2B world, where I focus, we’ve seen software begin to move beyond IT and desk work and into the daily lives of workers wherever, and in whatever form, they may work. Data and connectivity have improved the capabilities of workers, and have reduced cost and improved yield through better information flows and automated capability. Entrepreneurs are broadening the penetration of tech into an ever widening set of opportunities, and corporations from across the spectrum have taken notice. We’re seeing more investments and acquisitions from non-traditional tech buyers than ever before, with folks like GM, Kellogg’s and even Sesame Street jumping into the game. It’s certainly been a good few years for the entrepreneurial tech scene.
When Pitchbook released its year-end summary of venture activity, it gave us a glimpse into how the venture market has reacted to these dynamics. As we pretty much all know, it’s been a heavy few years of venture investment. In fact, the past 10 years have seen steady growth in both the number of investments and the total dollars invested, although the latter dipped before rebounding strongly post-recession. Some of this growth was certainly driven by broader market dynamics (e.g. little return elsewhere with a “rich” stock market and interest rates near zero), but the investment opportunities wouldn’t have been there had it not been for the continued proliferation of software and the internet. As investors saw the potential of software to alter the trajectory of growth and fundamentals of profitability across a host of industries, money began to pour in. Venture fund-raising has been strong and continues to be strong against this background, and the past few years saw more and more “tourist investors” getting into the game as well (especially at the earliest and latest stages). With more money flowing into the latest stages, traditional growth investors pushed earlier and each part of the venture ecosystem saw significant growth. Traditional venture sectors like SaaS and Consumer Internet exploded, as did burgeoning markets like “On-demand,” “E-commerce” and Digital Media. However, nothing lasts forever, and the overheating finally impacted the market in late 2015, as the venture market hit its peak.
In the back half of 2016, the contraction began. Since that point, we’ve seen fewer deals being done by mutual funds, hedge funds and non-traditional angels, and it appears that traditional venture and growth capital investors are returning home. Across the various stages of the market, investors are generally returning to their preferred stages of investment and valuations are beginning to return to a more typical level. Here too there may be other market forces at work, but I think we all know that the venture market got a bit too far “over its skis.” Too wide a set of companies raised far too much money at far too high valuations, under expectations far too heavy for their market potential. The up-rounds eventually ran out, while the extreme-multiple acquisitions didn’t materialize as fast as needed. Certain companies and sectors are in bad shape as investment declines, while others remain strong and, after a bit of rationalization, should continue to grow.
But a cursory glance at the total market doesn’t give us the full picture. The first set of charts below highlights the slight difference in the shape of the dollars and deals chart above. While the number of deals has decreased quickly, the total dollars invested have declined more slowly, concentrating capital into fewer companies. When you look at early-stage venture in particular, there has been only a slight decline in dollars invested as the number of deals has declined to nearly 2010 levels. When we look at the charts below, we can see the median round size roughly double from a 2010 low, back to pre-recession highs. The average VC round size highlights this concentration even more sharply, with bigger deals driving averages much higher than ever before. The early stage has reached a high point, and late-stage averages have doubled off of their pre-recession high. Some of this capital expansion at the growth stage is likely tied to companies staying private longer and bringing in more capital at later stages, but the heights of the early stage show that rounds are getting done at bigger dollar amounts than ever before. A Pitchbook search confirmed this for me: the average Series A was $8.4M, and the average Series B was $22.4M – both well above the $5-6M Series A and $10-12M Series B most of us think of.
When we look at this next chart, showing venture fund-raising trends, it becomes clear that this trend is unlikely to slow. US venture and growth capital funds hit all-time high fund-raising levels in 2016, meaning that there will be a lot of capital to invest over the next few years. Given that we’ve seen the number of deals (and therefore funded companies) drop in the past few years, in particular at the seed stage, it’s likely that we’ll continue to see a concentration of capital into fewer companies/deals over the next several years.
US Venture Capital Fund-raising by Quarter
So how does all of this data correlate to the beginning of this post, where I suggested that it had become cheaper and faster to bring whole products to market? In some ways, it actually correlates quite nicely. With less expense to build, test, iterate and scale software, we’ve seen it penetrate a broad range of new markets and opportunities. Delivering software that improves intelligence, capability and automation has the power to change the paradigm of efficiency and performance. Connectivity and cheap computing can take this value to new heights, and the ability to efficiently distribute software over the Internet can allow a company to quickly scale. Of course, it also means that competitors, both upstarts and adjacent players, can quickly follow suit to chase employee talent, customers and the good market opportunity. Where a company is well positioned to take on a big market opportunity, with defined processes, pouring as much “fuel-on-the-fire” as possible makes a ton of sense. The faster it can grow to scale, the harder it will be for others to attack it and the faster it’ll reach its potential. For venture investors with the opportunity to invest behind a company in this position, it’s certainly reasonable to invest as much as is prudent. For the company, if they are truly confident in the opportunity, it also becomes a clear decision.
However, there are many costs to a big growth round that hinder its viability in other situations. First, for all but a very small sliver of companies, financings take time. They can distract from the CEO’s focus on the business and can take months to put together. Second, and perhaps most significant, is that they come with expectations – an immediate expectation of high growth, an expectation of exit within an appropriate time-frame (usually 3-7 years) and an expectation of that exit generating a 3-10x return for the investor (depending upon stage). Lastly, it typically means a large amount of capital relative to the size of the business, and fairly significant dilution. The capital needs to be sufficient to cover the expected cash burn that will generate out-sized growth, and must be sufficient that the investors own a “meaningful” stake in the business. Usually this means 20-30% dilution for the existing shareholders, who see the opportunity to own a bigger piece of pie by owning a smaller piece of a much larger pie. For companies ready to grow at the pace growth capital dictates, toward the market opportunity and valuation hurdles it typically ascribes, growth capital can be a wonderful tool.
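The dilution arithmetic above can be made concrete with a quick sketch. The round size and valuation below are hypothetical numbers chosen for illustration, not figures from any actual deal:

```python
# Hypothetical illustration of the dilution math described above:
# a growth round where new investors take a "meaningful" 20-30% stake.

def post_round_ownership(pre_money: float, investment: float) -> float:
    """Fraction of the company the new investor owns after the round."""
    post_money = pre_money + investment
    return investment / post_money

# Example: $15M invested at a $45M pre-money valuation.
new_investor_stake = post_round_ownership(45_000_000, 15_000_000)
print(f"New investor stake: {new_investor_stake:.0%}")  # 25%

# Existing shareholders are all diluted by that same fraction.
founder_pre = 0.60  # founder owned 60% before the round
founder_post = founder_pre * (1 - new_investor_stake)
print(f"Founder stake after the round: {founder_post:.0%}")  # 45%
```

The founder's smaller 45% slice is only worth it if the new capital grows the pie by more than the dilution – which is exactly the bet a growth round forces the existing shareholders to make.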
For other companies though, and for many at a particular time in their lifecycle, a big venture or growth capital round might not be the right fit. In a world where products can be built, scaled and distributed more cheaply than ever before, a smaller amount of capital, with more flexibility, is sometimes the better fit. In the B2B tech world, where a company can begin to generate revenue fairly early in its lifecycle, we see this situation frequently. Often companies in this situation have raised a couple of million dollars to build a focused product that addresses a specific opportunity within a broader market. On the back of that first investment, they’ve found product/market fit and have begun to generate single-digit millions of revenue and nice growth, with limited cash-burn that allowed for runway to prove out the opportunity. They want to expand and grow more quickly, and/or expand their market opportunity, but haven’t yet tested the market to prove that those expansion strategies will work out. Because the base business is working well, they need a smaller investment, with greater flexibility, to prove it out. However, what’s available are ever-bigger growth rounds. Given that they haven’t yet hit proof points in their expansion strategy, most VCs are reluctant to invest (especially at today’s bigger round sizes), and if growth capital is available, the company is reluctant to commit to the expectations of that growth round.
For these companies, the expectations associated with the round might create more challenges than the extra money in the bank can offset. These aren’t bad companies; often they’re just the opposite. In many cases they’re delivering value to their customers, are headed toward profitability and are growing nicely – just not quite as fast as a VC would require. But if the company isn’t a great candidate for a growth capital round, there are few alternatives that can bring capital into the business, and the company can get stuck in a bit of a financing “no-man’s land.” Without any form of growth capital, the company is forced to move further towards profitability and grow only from organic cash-flows, rather than investing behind the business and moving further from profitability for a period of time. This likely means slower growth and product development trade-offs, which will make it difficult to raise a growth round when they are eventually ready. That can be a difficult cycle to break, and it can limit the company’s ability to meet its true potential.
For all of the innovation in the tech ecosystem, there has been little innovation in the financing options for tech companies. A hole has developed, and as rounds continue to stay large, and perhaps grow larger, that hole will likely grow. To fill these gaps and help companies reach their full potential, there is a need for new financing alternatives that help fuel growth through smaller and/or more flexible infusions of capital. The good news is that we’re starting to see new options come to market and entrepreneurs beginning to embrace these new financing formats. I have high hopes that moving forward we’ll see further innovation on the financing side of the technology world, just as we’ve seen it on the product side. Venture and Growth Capital will continue to be great financing options for some companies, but it’ll be exciting to see what alternatives arise to help every tech company maximize its opportunity.
Lately, private market valuations (and dollars invested) are high – higher than they’ve been historically and higher than the public markets. We’ve likely all shouted “holy shit” when we’ve seen the eye-popping recent valuations of companies like Uber and Slack, but in turn, we’ve all likely muttered “holy shit” when we’ve heard about their revenue growth rates. Big dollars help drive scale, but there is more at play here than dollars alone. The rapid ascension of mobile and the development of a broad set of cloud services have generated a product ecosystem that makes it easier than ever to develop, and faster than ever to deliver, a product that addresses and scales quickly into the mainstream. Whether or not the financing environment changes, and whether or not these valuations prove justified, we’ll likely continue to see new products developed at an extraordinary pace, businesses scaled at an unprecedented rate, and markets overturned faster than incumbents can counteract. Uber, for example, was founded in 2009 and just 5 years ago, was running tests in San Francisco; it’s now reportedly in talks to raise at a $50B valuation and it’s hard to find someone who hasn’t at least heard of the service.
In Crossing the Chasm, Geoffrey A. Moore popularized the term “Whole Product,” and considers it a critical element to mainstream adoption of a product and overcoming an incumbent solution (I’ll now attempt to outline that, though I’m sure I’ll butcher it). While early adopters are comfortable leveraging a partial product, and filling in the gaps themselves, the mainstream market isn’t. The Whole Product is the entire solution that must be delivered to make the product work, and to encourage a mainstream customer to replace an existing solution with the new product, no matter how much better performing the new solution may be. For example, let’s say you were using a charcoal grill, and propane grills came to market. The propane grill was a step up in convenience and you wanted to get one. With a new shape, a new fuel source and a new type of fire/heat to cook on, you’d need a suite of tools (like propane tanks and grill covers) and some new training. If the new grill came only as is, without the propane tank installed and a grill cover included, the mainstream wouldn’t be as likely to switch over. With all that packed in, it’s much easier to make the swap. Software, traditionally, wasn’t much different. For an enterprise to install a new system, they’d need servers, someone managing the servers, terminals, software packages installed on employee desktops (and therefore the desktops themselves), etc. If a new software platform required new hardware, training, etc., then you’d need to have it all available before the “whole product” could be sold.
In today’s world, cloud services and mobile devices have shifted the paradigm of product development and delivery through the ability to deliver a whole product nearly from launch. On the back-end, cloud services enable the spin-up of compute resources instantaneously and at scale, without any real capital expenditure. API services, like payments processing, communications infrastructure, and more, give a company the ability to deliver all elements of a product quickly and with little additional development and upfront cost. Open source software gives developers a boost in getting product to market, and access to both 1st and 3rd party data sets allows for greater product intelligence with often limited (and sometimes no) integration work. On the front-end, we all have mobile devices in our pockets. With cellular and wifi proliferation, products can reach beyond the office or home, opening new business opportunities and, for customers, greater engagement and usage. The data associated with a device can tell us much about a person’s environment – not only where they are, but where that can then lead us: weather, proximity, speed, direction, etc. Lastly, marketing virality can reach new heights as we’re always connected to learn about, or download, a new product.
Incredibly valuable businesses are being built, incredibly quickly, leveraging this “whole product” paradigm. As it’s both about as popular as you can get, and a key beneficiary of the mobile/cloud dynamic, let’s take Uber as an example to help illustrate the point. I’ve gone through a very simplified customer lifecycle and ride experience detailing the Uber experience vs. “what could have been.” The latter is a look at what the product would be like without cloud services, APIs and mobile devices, although I’ve tried to consider other modern elements (like connected credit card readers). This also doesn’t take into account things like driver on-boarding, complex marketing, surge pricing, etc. but that just keeps it simpler.
The experience on the right seems much like a traditional car service, and although convenient, just isn’t that magical Uber experience. There are lots of inefficiency points and more importantly, a lot of added cost to realize the product vision. Apart from the car itself, with Uber, the only tool that both the driver and passenger each need is a mobile phone, which they most likely have anyway. In the “what could have been” case, a customer needs a computer and internet connection, but there is a lot more infrastructure on the service-end in particular. A local dispatcher needs to connect cars and riders, and then needs to input details into a software system which communicates with the customer and accounts for the ride, and the driver needs an in-car radio, connected credit card processing machine and either a GPS or an intimate knowledge of local routes. There is a lot more cost and a broader product to “sell” to drivers in particular, which would need to be overcome to scale that business (vs. the astronomical rate of scale of Uber today and many drivers that participate but otherwise wouldn’t).
This is not a structural change limited to ride-sharing or consumer software, but one open to nearly all new “tech-enabled” businesses. It’s helping to disrupt old-line businesses with longstanding problem sets, previously far more difficult and costly to upend. It’s revolutionizing (and has the potential to further revolutionize) productivity, collaboration and efficiency across the enterprise. Access to new data generated internally, from cloud platforms and from mobile devices is changing the way that brands interact with consumers (an area we’ve spent a lot of time investing in), and new businesses are being built and scaled to address new levels of data access and developing communication paradigms.
And so, in conclusion, whether or not the current financing situation holds, I believe we’ll continue to see companies that leverage mobile dynamics and cloud services / APIs to rapidly deliver whole products that scale at incredible rates. By leveraging the unique data and connectivity of mobile devices, and the near unlimited scale of the cloud, entrepreneurs will continue to attack market incumbents with lightweight solutions that offer customers value from sign-up. We’ll continue to see massive disruption in markets both big and small, and in those larger markets, businesses that grow at astonishing rates. Over time, customers will come to expect this dynamic and the businesses that deliver on it will offer remarkable returns for both entrepreneurs and their investors. Whether or not you believe we’re in an investment bubble, we should all be excited by, and investing into, start-ups that build leveraging the (very real) whole product paradigm.
It’s been far too long since my last post, which looks to be from December 2012, and I’m finally working on a new post.
A lot has happened since the 2013 New Year. At the time, I was Director, Operations at Yext, but in March 2013 I joined Comcast Ventures as a Senior Associate. In my 2+ years at CV, I’ve worked on numerous investments, held Board Observer seats and worked closely with our management teams, and worked to develop deep investment theses with my colleagues. I was promoted to Principal earlier this year and personally, I was married to my wonderful wife Meg in June 2013.
A few months ago I decided that I needed to start blogging again. The past two years have been a period of incredible learning for me, and I’m now feeling like I can put those learnings into a coherent set of writings. Versus my prior posts, which were more operating-focused, newer posts will be much more about market analysis – it’s what I do now. I hope to post about overall market trends, as well as individual sectors of interest. Specifically, my areas of interest are in enterprise software and platforms that leverage data to generate personalization, automation or insight directly for the sales or service reps who are a company’s front line with customers. We’ve made several investments into marketing tech platforms that fit that mold, but I believe there will be opportunities developing in customer support, operations and logistics platforms in the coming years. There are efficiencies to be realized, although much of the opportunity boils down to improved customer experience; but I’ll dig in on that in future posts.
I’m hopeful to publish my first new post in the next few days, and I’m just putting the finishing touches on it. At the least, this post should keep me honest about getting it out soon. More to come…
A few weeks ago Roger Ehrenberg of IA Ventures wrote a great post on data-driven planning and execution, called Plan Well, Execute The Plan. In that post Roger outlines the benefits of “keeping one’s head down” and the process of executing off of a well formulated, data-driven plan. His post is great, and so I won’t repeat it, but the process is essentially to formulate a hypothesis and associated tests, run the tests, analyze the data, implement changes, formulate a new hypothesis, and repeat. He also outlines the benefits of a hypothesis-driven process, which I believe boil down to focus and smart decision making. Since reading Roger’s post, I’ve been thinking quite a bit about, and am writing this post to expand upon, why this process really works: Why does hypothesis- and data-driven development really help drive focus and smart decision making?
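The formulate-test-analyze-repeat loop described above can be sketched in a few lines of code. This is my own illustrative framing, not anything from Roger's post; the hypothesis, metric and threshold below are all made-up examples:

```python
# A minimal sketch of one turn of the hypothesis-driven loop:
# state a hypothesis, run the test, analyze the data, decide.

from typing import Callable

def run_cycle(hypothesis: str,
              run_test: Callable[[], dict],
              analyze: Callable[[dict], bool]) -> bool:
    """One iteration: run the test, then judge the hypothesis on the data."""
    data = run_test()
    supported = analyze(data)
    print(f"{hypothesis!r}: {'supported' if supported else 'rejected'}")
    return supported

# Hypothetical example: does a new onboarding flow push activation above 40%?
supported = run_cycle(
    "New onboarding raises activation above 40%",
    run_test=lambda: {"activated": 46, "signups": 100},  # stand-in for real test data
    analyze=lambda d: d["activated"] / d["signups"] > 0.40,
)
```

The outcome of each cycle, supported or rejected, becomes the input to formulating the next hypothesis, which is what keeps the process compounding rather than meandering.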
Much like a sprint-cycle deters distraction for a development team, a hypothesis-driven model helps to manage and deter distractions for the group that utilizes it. Sprint-cycles help developers to focus on the critical features as determined in the beginning of the sprint, while deferring other initiatives to the backlog. Upon completion of the sprint, options for the next sprint are reviewed and the sprint is set based upon the current priority. Often, backlog items that once seemed like high priority, or that otherwise would have served as a distraction, are no longer relevant and are either leap-frogged in the queue or pushed out entirely. Sometimes these features remain high-priority, but if pushed out often it is because the feature is no longer requested, because the team’s focus has shifted, or because an alternative has been discovered with additional time for thought.
In much the same way, a hypothesis-driven culture helps to avoid having (as Roger puts it in his post) “good people and companies knocked off kilter by glamorous, shiny stuff happening in their external environment.” The reality is that these are the types of things that can kill a company by a thousand tiny cuts. I discussed some of this in my post on Parallel-Process Product Development, but focus is important because that “shiny stuff” can continually distract from core product development. The problem is, that “shiny stuff” can also be so difficult to ignore: a competitor releases a press-grabbing feature, a major customer makes demands ahead of a contract, or a strategic brainstorming session yields lots of exciting new ideas. These distractions can often take the company away from its focus, and just as a sprint-cycle maintains focus, a hypothesis-driven culture keeps the focus of the organization on the key hypothesis and the related tests. If after completion of the test, the feature/product/direction is still deemed relevant, then it can be tackled in the next set of hypotheses and tests.
Smart Decision Making
When a company is first founded, it’s generally done on the back of a hypothesis or a set of hypotheses. It’s not always referred to as such, but it usually is just that – an idea that you think will work, in a specific marketplace. However, as the great Steve Blank says, “No plan survives the first contact with customers.” Inevitably, as you go to market, things change and your original set of hypotheses is either proven or disproven. The more formal your processes around these hypotheses and the data generated, the smarter your decision-making going forward will likely be. Each successive “test” brings about a set of data that you can use to formulate the next test, ensuring that you are utilizing the “focus” discussed above to your best advantage.
Without hypotheses, testing and data, the only way to determine your direction is by gut and instinct. Sometimes your gut will lead you in the wrong direction, sometimes in a tangential direction, and at the least, if you do head in the right direction, sometimes you’ll zig-zag your way there. Data has the wonderful characteristic of never having opinions, and if you extract the right data for your test, it’ll direct you accordingly (it could, of course, have biases, and the individual reviewing the data has opinions, but let’s assume perfect data and unbiased analysis for now). When you test multiple elements, either simultaneously or over time, often some of your hypotheses prove true, while others are disproven. As a result, the data will often dictate a scenario where some of your forward direction remains rooted in what you’ve proved, and where other elements are shifted – the classic pivot. The definition of a pivot point is “a point upon and about which something rotates,” and a proper data-driven, hypothesis-based process helps to determine which elements to pin, and which to shift about that pin. Once you rotate, you can reset, determine the next set of hypotheses and tests, and move forward with focus from there.
There are likely additional benefits to a hypothesis-driven process, but these are the ones I’ve personally experienced. Additionally, while most easily relatable to product development, hypothesis-driven processes are useful in a number of business areas. Sales and marketing are great examples, and it’s applicable in many additional ways. I’d love to hear about other people’s experience utilizing hypothesis-driven processes and welcome related stories both positive and negative.
One of the biggest start-up lessons I’ve had to date is the power of dedicated teams, and a parallel-process approach to product development. Some might refer to this as the “portfolio approach,” but it’s really more of a component of that approach. The important element is that you have distinct, dedicated teams working on your “portfolio” of projects/products. In the past few weeks, I’ve had 3 or 4 separate discussions with different individuals on the power of this development methodology, and I thought it made sense to put together a post on it. Some of this material was covered in Collaboration Domination (Part 1 and Part 2), but this post gets a bit more detailed with respect to parallel-processing.
When I first moved into product management it seemed like we were treading water on our most important projects. We had good vision, a detailed product pipeline, a small, but excellent, team and discipline around the scrum process. However, we could never advance much beyond a few small elements on our major initiatives. For us, those initiatives were an online, self-service product (we were historically an enterprise sales-based organization), and a set of efficiency and automation tools for our analysts. The two projects were critical to our advancement and growth, but we just couldn’t make progress.
I always knew that distractions were causing the bottlenecks, but it wasn’t until David Wolfe, a product development veteran, began working with the company and revamped the team that I learned how to manage the problems. When he started, David took one look at our team and knew it needed to be restructured. Given that we were a relatively small group, in a young organization, we were operating as a single unit. In doing so, we attempted to complete components of the two critical projects alongside bug-fixes and short-term projects (usually existing product revisions and client-specific requests).
First, David immediately erased the distinction between “product” and “engineering” and made sure we all understood that we were, overall, one cohesive, cross-functional “product development” team. Then, he split the larger team into three very “slim,” but very focused, project teams. As the sole product manager I operated across all of them, but otherwise team members were allocated to specific project teams, focused on specific goals. The online team had a designer, a front-end developer (also a UX specialist) and a couple of back-end developers. Our internal tools team had two back-end developers (one of which had some front-end skills as well), and our bugs/short-term/special situations team had an in-house developer and an off-site developer. The teams were small, but ultimately contained the required skill-sets, and were only allowed to focus on their respective projects.
Within weeks it was clear that the difference was remarkable; our core projects were beginning to fly. In less than 2 months we had built an entirely new, and far superior, web experience, implemented data tracking and analytics, and would continue to advance that project nose-to-tail for the next 9 months. Our internal projects were advancing far beyond our expectations, and bugs were getting fixed as needed. It became extremely clear that our challenge prior to restructuring was centered on the short-term requests. In our old model, these would come to dominate each sprint, as they were often handed down from management and sales as important and impactful. Additionally, they were viewed as only “short-term,” and not something that would distract and impact our development process beyond a few weeks. However, the big problem was that there was always another short-term project on the horizon, and ultimately these projects came to eat all of our resources.
Given this dynamic, our efforts were not appropriately split between core projects and short-term projects. For example, if a short-term project was worth 40 points of effort, and we had a sprint with 50 points of resources, 40 would be taken up by that short-term project, leaving only a small sliver of time for the longer-term projects. By parallel-processing the projects across 3 teams, we limited the resources for every project, but doing so resulted in far fewer distractions. For example, that 40-point short-term project would now need to be completed by a team that maybe only had 10 points of bandwidth, making it a 4-sprint project. Meanwhile, the online team had 20 points of bandwidth and could use all 20 points, each sprint, to focus on the goal of advancing the online product.
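The point arithmetic above can be sketched as a toy model. The point values are the hypothetical ones from the example, not real sprint data:

```python
# Toy model of the sprint-capacity math described above.

def sprints_to_finish(project_points: int, points_per_sprint: int) -> int:
    """Sprints needed to finish a project at a given per-sprint capacity."""
    return -(-project_points // points_per_sprint)  # ceiling division

# Single-team model: one 50-point team absorbs the 40-point short-term
# project, leaving only 10 points per sprint for the core roadmap.
core_capacity_single = 50 - 40
print(core_capacity_single)  # 10

# Parallel-process model: a dedicated 10-point team takes the short-term
# project, so it becomes a 4-sprint project for that team alone...
print(sprints_to_finish(40, 10))  # 4

# ...while the 20-point online team spends every point of every sprint
# on the core product, uninterrupted.
core_capacity_parallel = 20
```

The total capacity is the same either way; what changes is that the short-term work can no longer crowd the core roadmap out of each sprint.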
While the same principles of “parallel-processing” across distinct teams can be applied to other functions as well, it’s extremely effective for product development organizations. I fundamentally believe that even for the smallest start-ups it would be helpful to think in this manner, and it’s critical as the company grows. The primary difference will simply be the number of teams, but each team should strive to be focused on just one project at a time. The projects can change, and the team members can be moved around (not too much, but rotation is OK), as long as at any given time, there is only one project in focus for each project team. Below is a summary of a few of the critical elements of, as I call it, parallel-processing:
1. “Self-contained” cross-functional teams: Cross-functional teams are an important element of this approach. Even if team-members need to split time (not ideal, and should only be done when truly necessary), it is important that all skill-sets are represented. First, the dedicated resources ensure that distractions are less prevalent. Second, cross-functional collaboration is critical to rapid iteration and decision-making. I can’t tell you how many times we hit a roadblock, even mid-sprint, and were able to overcome it with a few hours in a conference room white-boarding, discussing and ultimately creating a plan with (what we felt was) the best combination of design, usability and ease-of-development.
An earlier post, Collaboration Domination, Part 2 discusses this further
2. Consistent and Shared Vision: While the team should have the ability to iterate and build as they determine is best, the entire organization should share the same vision for the company, and should be in agreement on what projects/products are most important. Since the resources for any one project are going to be less than the overall available resources, without buy-in across the company (particularly across the executive team, but in the best situations everyone is on the same page), the teams are likely to be continually distracted, often redirected, and frequently pulled in different directions by different execs/functional teams.
3. Insulation: For this method to truly work, it needs to be adhered to. If the different product/project teams are continually bounced around, or their progress is hindered by interruptions, one-off requests and short-term requests, it will fail.
4. Product Focus: It is a well known tenet of start-up building to focus on one thing and to keep distractions to a minimum. For every organization, saying “no” to non-core projects is always an option, but the parallel-process approach makes it much easier to limit distractions. In that approach, short-term projects are limited to only a small pool of resources and can’t distract from the primary goals of the company. An added benefit is that, if needed, there is room for testing new opportunities and non-roadmap product features, both without distracting from the core roadmap and without having to unilaterally say no.
5. Team Focus: Over time, when a team is dedicated to a single project, they become extremely invested in that project. I saw it clearly when we restructured. Rather than constantly “switching gears,” the team was able to throw themselves completely into a single project. Over time, it made it easier to think through problems, more seamless to work together, and made for a more cohesive unit.
This isn’t to say that the team must be static forever, and indeed some turnover and change is good for sparking innovation and creative thinking, but it shouldn’t be frequent. Just like any sports team getting better and more cohesive as they continue to play together – a QB and WR becoming lock-step, a hockey line passing by instinct or a SS and 2B turning a double-play with perfect timing – assuming they mesh, the longer they work together, the quicker, more efficient and better a product team becomes.
It’s been a little while since I’ve written a post. As some of you know, I’m currently (as my friend Darren would say) a “free agent” and spending most of my time figuring out what’s next. However, I thought I’d take a bit of “Hurricane-time” to write a quick post on a pen. Yes, a pen … but not just any pen – the Pen Type A from CW&T Studios. It started as a Kickstarter project (a very successful one), and it’s not just a really cool pen – its campaign also provided a lot of great lessons.
Lesson 1: Be Passionate About What You Do
CW&T is the design shop of Che-Wei Wang and Taylor Levy, and they introduced the Pen Type A Kickstarter campaign in July 2011. The pen was based upon the Hi-Tec-C, a Pilot pen that Che-Wei and Taylor had a passion for. It was this passion that was transcendent, and it led the campaign to success in rapid fashion. Just 12 hours after launch they met their initial funding goal, and by the end of the campaign they had crushed it. Personally, I came across the campaign a few days after launch and, despite having never seen a Hi-Tec-C pen, was so impacted by their passion that I purchased a basic model of the pen. After understanding what the passion was about, I decided to put in my Kickstarter pre-purchase.
The passion that Che-Wei and Taylor had for their project was inspiring for those who shared it and contagious for others. Ultimately the campaign not only blew away the goal, it exceeded it by over 100x. The product they ultimately created is a beautiful and well-built writing tool that more than matched their original designs and lofty goals.
The important lesson here is to be passionate about what you do. If you’re passionate, it will be clear and will shine through in your marketing, pitches and other interactions. It will be contagious both to those with similar interests and to others: even those who may not have been aware of your market, your product, or even the problem you’re solving will be infected as they start to research and better understand why you are so passionate. Additionally, if you’re passionate, it will shine through not only in your original plans, but also in the product you ultimately build.
Lesson 2: Your Early Customers are Investors Too
Unfortunately for CW&T, Pen Type A proved far more difficult to manufacture than originally estimated, and the team met challenges throughout the process. By the time the last pens were delivered, it was well over a year since the campaign had ended. However, despite a badly delayed shipping cycle, the team was able to maintain support. Having built an enthusiastic “fan-base” with their passion (as noted above), they were able to keep the enthusiasm high by communicating throughout the process. All in, the company sent 31 updates and answered thousands of e-mails and user comments. They kept the community informed, and made everyone feel like a part of the team – enough so that not only were there few complaints, but Kickstarter backers even showed up at the CW&T studio to help!
Like many other Kickstarter campaigns, CW&T underpriced their initial production run for early backers. The Pen Type A cost $50 when pre-ordered through Kickstarter, was intended to be $99 once released, and is now $150 through their online store. It was important for them to charge for a valuable product (at least at cost), but rather than making a large profit on the initial production run, it was more important to recoup their costs and get up and running – to get their product in people’s hands, start building a user-base, begin gathering feedback and get their processes in order. They were able to do that in spades, and their next generation is improved, faster to ship, and indeed more expensive.
This lesson is a bit easier to visualize with Kickstarter, where backers are a combination of investor and customer, but it’s important for any business. It’s critical to remember that early customers are a type of investor as well. They are taking a “flyer” on your business; perhaps because it appealed to their passion, perhaps because it addressed a problem they have, or perhaps for another reason, but whatever their reason, they took a chance. You need to keep them close to the business and prove value to them, because no matter what, they can always go back to what they were doing before. If you are big and established, you can price and provide customer service for optimal profit and capture optimal value, but when you are young and unproven, you should aim to exchange some of that profit and value for rabid customers that evangelize, provide feedback and help you grow.
Lesson 3: Good Design is Critical
Part of the appeal of Pen Type A is its design. Not only does it look very cool (as the picture below shows), there are elements that make it a really awesome product. CW&T spent significant time ensuring that the pen felt right in the hand, created a vacuum seal to prevent leaks and provide for better use (plus the pop sounds cool), and was manufactured properly and built to last. As a user, I can say that the pen is a pleasure to use and looks great on my desk. It’s something I want to use, and it’ll be a long time before I buy another nice pen. In fact, the only pen I’ll likely buy is this one, as a gift for others or maybe another one for my office.
The lesson here is pretty simple: good design matters. For software businesses, things are a bit different than for a hardware product, or a tool (like a pen), that can’t receive updates. Still, design is something that is often overlooked in a rapidly iterating business, and as Pen Type A proves, good design can keep users loyal and make them want to use a product. Even if two products solve the same problem, a user is more likely to use the better-designed one – the one that is more intuitive, easy to use and, quite frankly, fun. One doesn’t need to look much further than Apple to back up that theory.
All in, I’m really enjoying my Pen Type A and I thank CW&T for making it. It’s not only a great pen, but it also crystallized some valuable lessons for me. I’m stopping at three for now, but I know there are even more lessons buried in there that I’ll think about each time I use it.
A few weeks ago I was having dinner with my fiancée, and we started talking about some of the leaders of her company. She works for a large Fortune 500 company, and so there are many leaders at many different levels. Specifically, we started talking about her boss, and her boss’ boss, and how terrific they are, especially when compared to many of her friends’ bosses and others that she has come across. As we talked, the pattern that emerged was one of “enablement.” Her boss, more than any other she has come across, made it her business to be supportive of her group – and not just in spirit, but by “clearing the pathway” so that each member of the group could perform the tasks that they were most adept at. As a result, the actions of her boss were leading to a more coordinated team, smoother operations, and often better performance.
While that conversation sparked my desire to write this post, the power of enablement is something I’ve thought about for quite some time. Others have too. For example, back in July, Zach Bruhnke published a great post called You’re not the CEO – you’re the Fucking Janitor, which incited Jonathan Strauss to post his response, You’re more than the Fucking Janitor: Thoughts on Startup Leadership. Both were great posts on start-up leadership, and as I read them, I saw the very common thread of “enablement.” At the earliest stages of a start-up, that might mean cleaning up, ordering computers, making sure payroll is running smoothly, and overall just keeping the product development team happy – the dirty work. As the organization grows, so do responsibilities, and the CEO might begin dealing with customers, board members, investors, or organizational challenges. As a company grows towards maturity, responsibilities can shift even further. The reality is that “enabling” can mean a lot of different things depending upon the circumstances and the individual – for instance, it could mean that you do what you do best (maybe it’s sales, maybe it’s coding) while giving others the resources and runway that they need to perform at their best. Maybe it’s making inspiring speeches, maybe it’s meeting with investors to raise the capital needed to continue operations (and payroll), or maybe it really is just taking out the garbage. Whatever it is, there is a common thread in each case, and that is empowering those on your team to perform at their best, and at their least distracted.
As I write this, I should note that my time as a manager has been limited. However, in my decade of work experience, I’ve worked under dozens of managers and a number of organizational leaders, holding several roles across a half-dozen unrelated organizations. I’ve seen what works and what doesn’t; who has driven me to perform, and who hasn’t; which organizations are the most prolific and which simply shuffle along. My learning to date has led me to believe that some people view leadership and seniority as indicative of their superiority, while others view it as a responsibility to lead (to be clear, this is about managers, not all senior professionals – you can be senior and not really a manager, and in all of this there are grey lines). In my experience as an employee, those who view a manager role as an entitlement tend to view themselves as individual contributors, and those under them as their support team. They view themselves as the team, and their personal success as the success of the team or organization. As a result, those working underneath them are often disheartened and face last-minute deadlines, constant schedule shifts, continual interruptions, micromanagement, long hours and energy deficiency.
On the flip side, the leaders I’ve seen and worked with that view their role as the leader of a team – almost a coach of sorts – tend to have the opposite effect. They see themselves as more of a keystone – sitting atop the organization in an important role, but as just a single element of it. I believe that they view their role as a responsibility to those working underneath them, and they provide the support needed for those individuals to perform in their individual roles. These leaders and bosses view success as a team effort, derived from a group of individuals working diligently at their respective roles. The bosses I’ve had who skew towards this side of leadership have provided visibility into workflow (where available), driven me to learn and develop, and helped me and my colleagues to help them, leading to a highly productive team and a lot of energy and excitement.
Examples of enabling leaders extend far beyond the business world. Recently, while watching a Yankees/Red Sox game, former Red Sox manager (turned ESPN announcer) Terry Francona began talking about one of the things he had learned from Joe Torre, the highly-successful 12-year Yankee skipper. I don’t recall the exact quote, but the sentiment stuck with me. Francona talked about the success Torre had during those 12 years (which included 4 World Series rings), and how much of it was attributable to what he was able to do off the field – and not just in strategy (although that was part of his success), but in the blocking and tackling of issues that kept his players free of distractions and enabled them to just go out and play. Every team has off-the-field distractions – it’s inevitable when you have big contracts, big personalities and enormous pressure – but until his last few years, Torre’s Yankees had few real mid-season distractions, and Francona stated that much of that was due to Torre’s ability to manage those distractions, allowing his players to concentrate on baseball and just go out and win games (including six pennants and a string of three consecutive World Series titles). On the flip side, you don’t have to look much further than this year’s Red Sox team to see what off-the-field distractions can do to an organization (disclosure: I’m a Yankee fan).
Of course, within the business world, examples of leaders as enablers are plentiful as well. Wiley Cerilli, who recently sold his two-year-old company, SinglePlatform, to Constant Contact for $100M, is known to be one of those leaders. Kenny Herman, a good friend of mine and the EVP of Business Development at SinglePlatform, had this to say about Wiley:
“Wiley often compared SinglePlatform to a football team; while every player can’t be the QB or star wide receiver, the kicker who can nail a 50 yarder, or a physical D-lineman unafraid to throw himself in front of the biggest guard each have an equal and significant impact on the outcome of the game. Often referred to as ‘team captain’, Wiley empowered each of our team members to maximize our strengths and outperform.”
And indeed (and to take the sports metaphors one step further), on the “about us” section of the SinglePlatform website they have a quote from Pat Riley – “Great teamwork is the only way we create breakthroughs that define our careers.”
Even Steve Jobs, someone that is often looked at as almost a dictator of his organization, said in a 1998 Fortune Article “Innovation has nothing to do with how many R&D dollars you have. When Apple came up with the Mac, IBM was spending at least 100 times more on R&D. It’s not about money. It’s about the people you have, how you’re led, and how much you get it.” Earlier that year, in a 1998 Businessweek Article, he also said “You’re missing it. This is not a one-man show. What’s reinvigorating this company is two things: One, there’s a lot of really talented people in this company who listened to the world tell them they were losers for a couple of years, and some of them were on the verge of starting to believe it themselves. But they’re not losers. What they didn’t have was a good set of coaches, a good plan. A good senior management team. But they have that now.”
In conclusion, one major takeaway from my first decade in the workforce is that the managers who have paved the way for me to perform (as well as learn and grow) have been the ones I have wanted to do the best work for, and for whom I have performed the best. In my short time as a manager, I’ve tried to uphold those same principles, and I firmly believe that good leaders are good enablers – that leadership is “enableship.” What that means depends upon the organization, the level and the team – it could mean heavy involvement or light involvement, broad goals or detailed instructions, pressure or leniency, financial resources or other organizational support, vision or execution, optimism or blunt reality, or any combination of factors – but it nearly always means that you support your team with the resources they need to succeed. When they have those resources, they are empowered to succeed, and with their success comes team success and organizational success. Geoffrey James has a great article in Inc that outlines the individual traits of inspiring leaders. Many of them line up with the traits I’ve discussed here, and it’s a valuable read and a good place to continue thinking about this topic.
A couple of weeks ago, Lenovo CEO Yang Yuanqing made a somewhat unique move and divided his $3M bonus into 10,000 smaller bonuses, distributing them among the lower-level employees of the computer manufacturing giant. The distribution was obviously quite newsworthy and has been covered extensively (with ZDNet most often cited), so I won’t provide much more of the details here. What I’d like to focus on is the amazing lesson in leadership it provides. I had been gearing up to write another post on leadership, but put that on hold until my next post – in favor of more current events.
Much of the news around the bonus distribution has been either neutral or focused on the generosity of the gift. While I in no way want to imply that the gift was not incredibly generous, or take anything away from that aspect of it, I also believe that there is an incredibly shrewd aspect to it as well. $3M is a lot of money, but Yuanqing made a total of $14M for the year, so for him it’s not enormously impactful. For the recipients, who are all lower-level manufacturing and administrative workers, the bonus amounts to approximately one month’s salary – a nice bonus, and impactful, but not a game-changer. What I find most important about the distribution is the meaning behind it, and why it was earmarked. The bonus was given to Yuanqing specifically for the record profits and shipments generated under his watch. By giving it to his employees, particularly the lower-level ones, Yuanqing made it clear that the success was not his alone, but driven by all employees of the firm. The bonus means more than the money – it’s a significant recognition of a job well done, and of the employees’ impact on the organization – an enormous motivator.
It reminded me a lot of a company we visited when I was at IGC. It was a tech company, although located outside of Silicon Valley, and in many ways beyond that (though likely tied to it), not your typical Silicon Valley startup. There was a tradition at the company that upon a significant achievement, employees were given a new Rolex. While the watch was a nice gift, that’s not what made it incredibly meaningful to the recipient – it was the recognition of a job well done and of one’s important role within the organization. I remember speaking with a few of the employees who had them, and each person wore theirs like a trophy. Those without watches wanted to be recognized in the same way, those with watches felt that they were already invaluable contributors to the organization and wanted to maintain that status, and overall everyone in the company felt that they were working towards the singular goal of making the company successful.
The common thread here is a tangible recognition of an important accomplishment and contribution to the company. Lower-level employees don’t always get that sort of recognition and feedback, and don’t always directly see the fruits of their labor. In fact, a friend from an investment bank recently shared a similar sentiment with me. This friend has been promoted on schedule (maybe even ahead of it), and makes good money, but said he often feels like a cog in a machine – an insignificant component, easily replaceable and having no impact on the overall success of the organization. Even though he’s at a large firm, I think that for lower-level employees across the board, it’s not hard to see how discouragement can set in with situations like these.
Some companies combat that feeling with constant feedback – Phin Barnes (@phineasb) recently wrote a great post on “Honoring the Assist” and how recognition drives a great culture. Others do it in more one-off fashion, like Lenovo and the company I mention above. However it’s handled – monetarily or just with a pat on the back – recognition is very important. Bonuses like the Lenovo one send a very strong and motivating signal about one’s achievement and role within the company. Even though every lower-level employee at Lenovo was awarded the bonus, it had a lot of meaning behind it, due to where it came from and what it was for. With a bonus like that, I’d be surprised if motivation didn’t propel Lenovo even further in the coming quarters, much like the employees at that tech company pushed even harder – the watch was a constant reminder of a job well done. That’s why I’ve titled this post “Yuanqing’s Shrewd Investment.” While generous, it was also incredibly smart. I wish him and the entire company continued success going forward.
Slow is Smooth and Smooth is Fast
This post is continued from part 1
The second collaboration-based driver of our increased output is that everyone is “on the same page.” By that, I mean that with all team-members involved in all major aspects of the process, it has been easy to quickly shift, pivot or iterate on design and functionality. In the old model, most work was done independently. Our meetings were shorter, but they were usually one-sided: one part of the team presented, and another absorbed. We’d then have to reconnect to discuss, answer questions, and re-absorb. Ultimately, the process was either drawn out, or the outcome was misaligned with the intent (the more likely scenario). The picture below (courtesy of FailBlog) communicates this perfectly. It was like a big game of telephone – the product team specified one thing, the designers created another, and the engineers took it in their own direction.
Another challenge we faced, beyond details being lost in translation, was that roadblocks couldn’t be fully anticipated. Each function within the team has its own strengths and weaknesses when it comes to understanding requirements, functionality and design/build challenges. While the product manager might design what he thinks is the most elegant solution to a customer need, there might be a few ways to actually go about solving the problem, each with its own design or engineering challenges. Working in silos, the specifications passed down from product to engineering are often the most complex possible solution – while a nearly-as-elegant solution could be built in far less time, and with far fewer complications. Working in a truly collaborative fashion lets you anticipate these potential challenges before development starts, and really before true specification begins. As a result, our outputs have matched our specifications, and the work we’ve completed has been done in less time than similar requirement sets took in the past.
I think the best way to sum this all up is with the “immortal” words of Phil Dunphy, one of my favorite characters in one of my favorite shows, Modern Family (although all the characters are pretty awesome in that show). In the Season 2 premiere, as the family is scrambling out of their formerly parked, and now drifting, car, Phil is yelling “Slow is Smooth and Smooth is Fast . . . Slow is Smooth and Smooth is Fast.” While it was a pretty hilarious scene, there was some truth in it. When we move too quickly, details are missed and coordination (be it as an individual (perhaps even in a golf swing) or as a group) falls apart. Having each member of the team involved in process components like sprint planning, task decomposition, user testing, etc. has certainly slowed those processes down. However, the time benefits gained from having everyone up to speed and in agreement have far outpaced any of the time lost in coordination. Small changes no longer require long discussions, mid-sprint meetings have gone more quickly because each participant was a participant in a prior meeting, and roadblocks and challenges have been anticipated.
As a result, post-planning, we’ve been able to work very smoothly and very quickly. As an example, see the changes from our original manager to the latest release of that same manager (shown above – click to see it larger). The project took some time to complete, but in actuality was likely done quite quickly for its complexity, and with (what we believe) is an excellent final product. Our service, which delivers English-language summaries of news from foreign-language sources, on companies and industries, through the web and through e-mail, can be a difficult one to clearly describe, even for those users working with our field sales team (and who therefore have someone to walk them through set-up). In order to deliver a valuable experience to our online users, who have only a brief, automated interaction with our product, we had to work with a very complex information architecture. Working collaboratively with a cross-functional team allowed us to develop prototypes, rapidly iterate on those, user test, iterate further, release, iterate once again and eventually release a more robust solution. Working in this way, perhaps the hardest part was cutting ourselves off to move on to another project.
Over the past 6 months, and with projects such as the manager, it’s become so clear to me why small, cross-functional, collaborative teams are so powerful. Working together allows us to better anticipate challenges, iterate and pivot quickly, and generate ideas from a broad base of experience. For those organizations still thinking in a waterfall model, I’d suggest considering a bit of reorganization (both team structure and even seating) in order to fold more collaboration into the process, and then analyzing the results. For those at the top of the waterfall, it can be challenging to feel like they are giving up some control, and for those at the bottom, it can be a new experience to provide greater input. However, I truly believe that the output will be worth the effort. I personally would have a very hard time moving back towards a stage-gate model, and have very much been converted into a believer. Going forward, I hope to continue learning and growing within this environment.
For anyone reading, I’d love to hear your thoughts in the comments section below. I’ve surely not covered all of the arguments for small, collaborative teams, and I’m sure there are some to the contrary.
Coming from a background in Investment Banking and Venture Capital, teamwork has always been a critical component of my workflow. Of course, in the finance world, teamwork rarely involves working in cross-functional teams (in bigger banks that sometimes occurs, but less so in smaller firms). In the start-up world, you have many functions within an organization, and so working cross-functionally is a common occurrence. However, true teamwork, or collaboration, across those functions can vary by the organization. More traditional organizations tend to favor more of a “waterfall” approach, where work is handed down as it moves across functions – for example, in product development, workflow is handed down from discovery to concept to design to development. Others, often the more innovation-centric organizations, lean more towards true collaboration – where each member of the team is involved in major decisions and planning, and teammates work closely together throughout the work cycle.
When I first started as a product manager, our organization was tilted towards the waterfall approach. We operated in an agile development process (releasing after 2 week sprints), and we discussed development in sprint planning, but the workflow was separate. Product Managers handled discovery and requirements, sometimes handed down from the sales or executive teams, and then passed those to the engineering team, who divided up the work and began development. Occasionally the engineering team would refer back to product management for direction, but the work was done largely in silos. I didn’t know much else, and the process seemed to work, so I assumed it was the way to go.
At the end of last year, our organization brought in a new executive, with significant experience in collaborative environments, who pushed us towards that model. All of a sudden, the Product Management and Engineering teams were merged into a single Product Development entity, planning was done as a group, and the team was organized by project and staffed cross-functionally – and our production and the quality of our work skyrocketed. Below is an example of our old website, built in a waterfall process between our head of marketing and the engineers, vs. our new website that was built by a small, cross-functional team consisting of a product manager (myself), a designer, a front-end developer, two back-end developers and our head of Product Development. It’s just the front page, which doesn’t even do it real justice, but it’s a good example (if you click on it you can see a larger image).
This spike in output and quality got me to think deeply about why collaboration was so much more effective in the long run (particularly because it was slower and sometimes more painful in planning). While I’m certain there are many more drivers, I came up with two primary reasons for the productivity jump. The first is based on the Diversity of Ideas – the benefits of having multiple, varied perspectives and a wide-range of experience-sets at the table. The second rationale for our productivity boost was having everyone “on the same page.” With team members participating in the entire cycle of a project, each teammate was always up-to-speed, and could make quick decisions, pivots and changes. I’ll elaborate on the first idea now, and the second in a post next week.
Diversity of Ideas and the Power of Borrowing them from the Widest Base
There are several components of cross-functional collaboration that yield benefits, although as noted I mention only a few below. Top of mind is the benefit that comes from having a wide range of perspectives, experiences, and thought-patterns in a single room. Often, in waterfall situations, it’s management or the Product Owner/Product Manager that develops the vast majority of ideas and dictates what the result should ultimately look like. However, even though the product manager might have the most relevant experience, or be tasked with driving the product forward, it does not mean that they have a monopoly on good ideas. It also does not mean that they have only good ideas.
Gaining perspective from individuals with differentiated backgrounds not only provides a broader base from which to brainstorm but also generates the opportunity for true “out-of-the-box” thinking (an overused but, I think, appropriate term in this instance). The perspective of varied experiences allows each member of the team to adapt their thinking based upon insight gleaned from others’ experiences. Steven Berlin Johnson (@stevenbjohnson), the influential author and entrepreneur, talks a lot about this using a term he borrows: “exaptation.” His latest book contains a chapter (and I recently saw a great talk he did on it) on this phenomenon, which is essentially the process of taking an existing process, technology or adaptation from one field and adapting it for another purpose. His classic example is Johannes Gutenberg exapting the screw press, already in use as a wine-making device, for use in the printing press. This also leads into the impact of a “coffee-shop” culture on innovation – discussions with friends/colleagues/etc. outside of one’s normal “box,” and learning about the tools and techniques they use in their lives and careers. With that added perspective, we have a much broader base of ideas to work with, and exaptation can spark leaps in innovation rather than incremental advances. Indeed, in his book, Steven Johnson ties his exaptation discussion to Apple, a company that many consider to be one of the world’s leading innovators. Despite its insular culture, within its walls Apple operates with a high level of collaboration – in each step of the design, production and sales process, team members from multiple disciplines are involved in planning, leading to a cross-pollination of ideas and ultimately cutting-edge technology and design.
From what I’ve experienced, this type of collaborative, cross-functional environment really does work. It can be more difficult and painful in planning (as I’ll discuss further in the second half of this post), but the results can be extraordinary. Many of us know the power of a good brainstorming session, and one component of that is the differentiated perspectives brought by differentiated experiences and thought processes. We can build upon the ideas of individuals, select the best, extrapolate new ideas, and ultimately wind up in a far better position than when we started. We all think differently, and our backgrounds help mold that. True cross-functional collaboration is like a great brainstorming session on steroids – ideas generated from different experiences and thought patterns can help mold a good idea into a spectacular one.
For example, one of the projects we’re currently working on is a revised subscription experience for our users – making the options more flexible, better matching pricing to content received and making the entire process easier for trial users that wish to subscribe. Unfortunately, our service is quite complex, and it’s not easy to develop an information architecture that easily describes the status/structure of one’s account. While I came up with an initial idea for our subscription experience, we ultimately decided to scrap it in favor of ideas borrowed from the subscription experiences of several tools that our designer and front-end developer use – services that I am aware of, but not well enough to have experienced their subscription managers, price lists, etc. Despite their original source, the ideas fit our product well, and we quickly adopted them after our team members introduced them in our planning sessions. Our end product will be developed further from those initial ideas, but they provided the base, and as a team we were able to move forward and collaboratively determine how the end product would operate. Without cross-functional collaboration, our subscription manager would never have been as good as it will ultimately be (or at least as good as we hope it will be).
In part 2 of this post, I dig into the second benefit of cross-functional planning – getting everyone in sync.