In my experience, one of the things that has been the most beneficial to my career progress is to… not buy new Marketing tools.
“But Jeff,” you say, “tools are how things get done. It’s a key way that the Operations team can enable and support the rest of the Marketing org.”
And you’d be right, on all counts.
However, I would estimate that, for most teams, evaluating, purchasing, implementing, and supporting new Marketing tools is one of the lowest-ROI activities we can be involved in. And I’m not even talking about it from a financial perspective, although that is always a key consideration.
Built incorrectly, your tech stack can quickly end up resembling the island of misfit toys and will be frustrating and challenging to manage. There are tens of thousands of tools out there, and the more tools there are, the harder it is to tell them apart.
And the first step of building your stack is to understand the jobs your tools need to do.
What is the real investment you make in a new tool?
Your time is your most important asset, and committing weeks or months to an evaluation carries a steep opportunity cost.
Consider all of the different tasks involved in running a decent software evaluation and implementation:
Use case definition and documentation (1-2 weeks)
Vendor discovery and outreach (1-2 weeks)
Vendor evaluation, especially if each vendor insists on running you through the typical process of demo request → wait 3 days → talk to an SDR → wait 5 days → Discovery call → wait 5 days → Demo, and on and on and on - you get the picture (1 month? 3? 5?)
Negotiations, including procurement/finance, legal, data security/privacy, etc. (6 weeks, if you’re lucky)
Final vendor selection, and the inevitable 5-7 days of disappointed emails and requests for win/loss interviews from the vendors that aren’t selected (1 week)
AE → CSM handoff and waiting for the initial onboarding call and tool access (3-5 days)
Initial onboarding call to first real onboarding call where you start making progress (5 days)
Completion of vendor onboarding
Completion of internal kickoff and enablement, UAT, and official “go live” (2-3 months)
At the end of it all, you’re into the project for at least 3-4 months with nothing to show for your efforts (yet).
This may seem like an exaggeration, but those of you who are seasoned software buyers know that these timelines are all too realistic, especially for any sizable ($30-$40k+) purchase.
This doesn’t even include a situation where a formal RFP is required and has to be constructed, and then you need to allow for adequate time for vendor responses and evaluation.
These timelines aren’t always the vendors’ fault; our own internal decision-making, review processes, and red tape are significant contributors as well.
Accelerating through this process can seem like a good idea, but there are critical areas you need to get right, or the chances of success for the rest of the project diminish significantly.
So what’s an Ops team to do in this day and age?
Not buy tools?
The Finance team would like that.
Until the pipeline dries up.
So what is the secret to a successful software evaluation? How can you have confidence that the decision you’re making is the right, well, let’s just say the “best” one possible?
What are your use cases?
I believe that it begins with the first step in the process - what I’ll call “use case evaluation”.
Software projects typically start in one of four ways:
“We don’t have this critical tool (marketing automation) in our tech stack, and we need to buy it”
“I’m tired of doing this manually/slowly, we should improve our process and do things faster.”
“This vendor keeps reaching out to me and I think what they do is cool, so we should talk to them” - aka vendor-driven evaluation
“Hey, wouldn’t it be cool if we could do (xyz)?” - aka Shiny Object Syndrome
It’s important to understand the jumping-off point for each new software project because each of these has major implications on how use cases are defined.
Options 1 and 2 - Net New Functionality, or Improvement of an Existing Process
For these options, you typically have a clearer path to your use case(s).
For option 1, you’re adding a significant amount of net new functionality to your stack. In the case of marketing automation, you can send emails, run nurture campaigns, store and manage data, host landing pages, collect data via forms, call webhooks, etc.
If these are things that your organization can’t do right now, then identifying how you will use each feature of the tool is a simple exercise, but well worth undertaking.
You’ll uncover nuances and details that may not have been apparent at the beginning of your evaluation process.
With option 2, you have the opportunity to improve a process that your team is already using. In some ways, this is the ideal case, because you have an existing use case to review and compare against, and you can easily define deficiencies and how to improve on them.
The last two avenues can be particularly tricky.
Option 3 - Vendor-driven evaluation
If you have a vendor telling you:
That you have a specific problem or need
Exactly what you should be looking for
What evaluation criteria should be used for looking at other tools
This isn’t a vendor evaluation, it’s a carefully planned ruse designed to offer you the illusion of choice while stacking the deck in favor of the vendor driving the process.
No vendor should have an outsized influence over your decision, outside of normal marketing and sales behavior.
I’ve had vendors offer me RFP templates and rubrics for evaluating vendors, and even volunteer to provide input on and help evaluate other vendors.
That’s like letting the fox guard the hen house.
In conversations like this, you also can run into what is commonly defined as “opinionated software”. In short, opinionated software is a tool that defines the process of how something should be done, and then enables you to do it.
On the other end of the spectrum is a tool that offers you complete flexibility - the focus is more on functionality and usability, rather than “how” the work is to be managed and completed.
These conversations make it hard to do an objective evaluation of potential tools because you’re considering which tool aligns most closely with a preferred process or framework (in this case, provided by the vendor).
In this situation, even technically superior vendors will appear to be less than a perfect fit. In the short run, you’ll end up with a tool that you think does exactly what you want, but in the long run, you’ll deal with the frustration of both a process that doesn’t fit your business needs and a tool that only supports that process.
Option 4 - Shiny Object Syndrome
The last approach is dangerous because it often doesn’t have a defined use case, or more commonly, the validity and priority of the use case are poorly defined.
This situation is ripe for those pesky Ops time-wasters, scope creep, and misaligned expectations.
Usually, this type of evaluation begins with a pipedream of a “perfect world” or nirvana-type situation and ends with an abortive effort to build something that isn’t needed for your organization.
For many teams, AI can fall into this bucket right now. As Operators, we’re geared to be forward-thinking technologists, and I think it’s common for us to perceive that there is value in AI, but what exactly that value is, and how to capture and replicate it at scale for our teams, remains a mystery in many cases.
How do you identify use cases?
Use case identification should always be the first step any time someone mentions anything about new tools.
I’ve found it helpful to ask the requesting team (or myself, if it’s a self-generated idea), to fill out a basic table, something like the following:
Pain Point - Ideal Solution - Expected Benefit
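If it helps to see it filled in, here’s a minimal sketch in Python of what a couple of rows of that table might look like once captured as structured data. The pain points, solutions, and benefits below are purely hypothetical examples, not recommendations:

```python
# Hypothetical rows of the use case table, captured as structured data.
# All entries below are illustrative assumptions.
use_cases = [
    {
        "pain_point": "Campaign UTM parameters are built by hand in a shared doc",
        "ideal_solution": "A governed UTM builder that enforces naming conventions",
        "expected_benefit": "Cleaner attribution data and less time spent on QA",
    },
    {
        "pain_point": "List uploads require manual deduplication before import",
        "ideal_solution": "Automated dedupe and normalization on ingest",
        "expected_benefit": "Faster campaign launches and fewer duplicate records",
    },
]

# Print a quick summary of the "jobs" we're considering hiring a tool to do
for uc in use_cases:
    print(f"- {uc['pain_point']} -> {uc['ideal_solution']} ({uc['expected_benefit']})")
```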
Lists like this help you understand the actual “jobs” that you’re considering hiring this tool to do. If you’re not familiar with this terminology, I’d recommend doing a deep dive into the “Jobs to Be Done” theory, an excellent framework for understanding how and why people use (or would use) certain products - which isn’t always what you’d think.
I’ve had experiences in my career where someone has asked for a specific tool, only to find out that the tool doesn’t do what they think it does, or what they want it to do. Getting very specific about what you’re trying to accomplish can save you from headaches down the road.
At the end of this process, you’ll have a specific set of jobs that you’d like the product to do, and you can objectively evaluate each of the tools under consideration specifically for how well they accomplish those tasks.
One thing to note: as you go through this process, look at it more as “finding something to do the job” than as embarking on a process to buy a new tool.
Keeping an open mind to how you can fulfill the needs of the requesting individual or team with tools and data already in your tech stack is an important part of this process. You may find that you can easily offer a suitable solution that doesn’t require any new tools.
What does this look like in practice?
There are a lot of different ways to tactically implement these concepts in a vendor evaluation.
One approach that I’ve found successful is to keep this information in a central location, like a Google Sheet, with a template that lets me capture how each tool performs the job(s) we need it to do, and then evaluate the tools against each other.
I’ve provided a link to a template that you can copy and edit to use on your own here.
The “Tool Feature Comparisons” tab is designed to capture major and minor use cases and to help you score them as they align with what you’re looking for. This gives you a somewhat quantitative approach to something that can otherwise be difficult to measure.
This is accomplished by using a numerical scoring system, similar to the following:
Scoring Rubric:
- 5: perfect fit for use case with no modifications
- 4: good fit for use case, perhaps requiring slight modification
- 3: use case can be performed but with noticeable modifications or configuration
- 2: use case might be achieved with considerable customization, final status unknown
- 1: not a fit for use case
- 0: feature/functionality does not exist
The spreadsheet is focused on allowing you to capture each of your specific use cases and then evaluate each vendor against them using a somewhat apples-to-apples comparison, which is surprisingly hard these days, even though every vendor seems to be quite similar to their competitors.
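If you want to sanity-check the spreadsheet math, or just see how the rubric rolls up into a comparable number, here’s a minimal sketch in Python. The vendors, use cases, rubric scores, and weights are all hypothetical assumptions, not real evaluation data:

```python
# Hypothetical weighted scoring across use cases, using the 0-5 rubric above.
# Vendor names, use cases, scores, and weights are illustrative assumptions.

use_case_weights = {
    "Email nurture campaigns": 3,   # major use case, weighted more heavily
    "Landing pages and forms": 2,
    "Webhook support": 1,           # minor use case
}

vendor_scores = {
    "Vendor A": {"Email nurture campaigns": 5, "Landing pages and forms": 3, "Webhook support": 4},
    "Vendor B": {"Email nurture campaigns": 4, "Landing pages and forms": 4, "Webhook support": 0},
}

def weighted_total(scores: dict[str, int], weights: dict[str, int]) -> float:
    """Sum of (rubric score x weight), normalized back to a 0-5 scale."""
    total = sum(scores[uc] * w for uc, w in weights.items())
    max_possible = 5 * sum(weights.values())
    return round(5 * total / max_possible, 2)

for vendor, scores in vendor_scores.items():
    print(vendor, weighted_total(scores, use_case_weights))
```

A normalized total like this makes it easier to compare vendors at a glance, but the individual rubric scores are still where the real conversation happens.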
As you apply these concepts in your vendor evaluations, you’ll find that you’re:
Buying fewer tools
Buying the “right” tools
Buying tools that are worth the time and effort you’re spending to procure them
Tool evaluation and purchasing is a skill, and it starts with knowing your buyer and what they (and you) are looking for. Understanding the complexities and realities of this process is a step in your career maturity.
Tool procurement is also a process and one that should be continually evaluated and improved. Understanding the use cases your team is trying to solve at the beginning of the process will have a beneficial downstream impact on the rest of your evaluation and purchase process, so it’s certainly worth the time to understand and improve.