Experimenting your way to AI success
Don't wait for the "perfect fit" to be provided by a vendor
When it comes to “real” AI use cases - the ones you would actually work into your daily routine - waiting to see what vendors introduce may be an exercise in futility.
Established vendors layering AI onto their current tools are bolting on a temporary value-add. It’s unlikely we’ll see any significant leaps forward from forcing today’s generative-first AI capabilities into tools that were conceived and built without them.
There is no playbook for adding AI to an existing tool. FOMO has pushed product and GTM teams to find any way possible to generate AI functionality in an attempt to stay relevant in the market.
Most of what we’re seeing labeled “artificial intelligence” is still just scratching the surface and likely isn’t worth your time.
The next (first) generation of artificial intelligence innovations for Marketing teams and Marketing Operations will come from those who are tinkering, experimenting, and trying things regularly.
So what does this mean for learning more and trying new things with AI?
I think it’s worth spending some time figuring out how to work the most accessible and moldable AI functionality available today - GPT models - into your daily work.
I recently undertook an exercise using a classic Marketing Ops use case that almost every team supports frequently - cleaning up and routing a post-event spreadsheet.
With the recent release of ChatGPT’s newest model (GPT-4o), it’s becoming increasingly realistic to ask the tool to execute an entire workflow of tasks on your behalf.
In this exercise, I asked GPT to follow through on the following common activities needed for post-event follow-up:
Cleaning and normalizing the data in the spreadsheet
Validating email addresses
Generating a lead score (based on ICP fit and activity type)
Routing to the correct SDR
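As a point of comparison, one of these steps - email validation - is easy to sketch deterministically. The regex below is a simplified assumption for illustration, not a full RFC 5322 validator:

```python
import re

# Simplified email pattern -- an assumption for illustration,
# not a complete RFC 5322 validator.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")

def is_valid_email(value: str) -> bool:
    """Return True if the value looks like a well-formed email address."""
    return bool(EMAIL_RE.match(value.strip()))
```

The point of handing this to GPT instead is that the model also catches fuzzier problems (typos like “gmial.com”) that a strict pattern check won’t.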
I generated a “monster prompt” that included specific instructions for each of these steps - I’ve referenced it below if you’d like to review it. If you have already documented your process, you’re 99% of the way there.
I’ve also included a link to the mock data spreadsheet I was using to test this process. It includes a sheet of data similar to what your organization might collect at an in-person event, as well as a tab that includes region-based SDR assignments for states in the US.
In the workflow, ChatGPT can reference different tabs within the spreadsheet, allowing for complex workflows like routing to be accomplished.
I’ve also included a process in the workflow that has GPT generate a tab in the spreadsheet tracking each of the changes it makes, and why. This is a critical part of the workflow, as it’s important to understand the changes being made in the spreadsheet and why the model decided to make them - it helps you to fine-tune the prompt moving forward.
On a side note - if you want to know some of what it’s like to be a manager, communicating and working with a GPT model is a great way to understand the level of detail and amount of work that goes into working with other individuals on your team. It’s not a true 1:1 comparison, though - the people you hire will have tremendous personalities, be critical thinkers, and have a desire to improve and learn.
The Details
Prompt example:
I have a spreadsheet of people who registered for an event. It is included with this message. The spreadsheet contains information about the individuals who registered as well as their companies. It also contains information about whether or not they ended up attending the event.
I need to have this data cleaned up, normalized, and checked for accuracy. I also need to have each of the individuals scored with two separate scores - one based on their fit relative to our ideal customer profile, and one based on whether or not they attended the event. Lastly, I need to assign each of the registrants to an SDR based on what we know about their geographic location.
At the end of the exercise, I will need to have a spreadsheet with four tabs generated:
A final version of the initial sheet with the cleaned, normalized, scored, and routed data.
A sheet that details each of the changes you made, with the following columns:
Original row number
Original value
New value
A small note with a reason for the change
List of potential duplicates
Determine duplicate records by looking at the email address
List of registrants from the same company
Determine registrants from the same company using email address domain, company name and website
Here are the details you need for each of these tasks:
Task 1 - Data cleanup and normalization
Ensure that each value in each of the columns is consistent with the intended data in the rest of the column
If you find a value that does not fit in a certain column, evaluate the other columns to see if there is a better fit. Move the data to that column.
I have specific instructions for some of the columns:
For the first_name and last_name columns, normalize the capitalization for all values that are all lowercase or all uppercase to Proper case
Check the email column to ensure that the email addresses are properly formatted
Check that the values in the phone number column are formatted correctly for phone numbers in the United States
Normalize the values in the title column to use the full version of titles. For example, change the values “VP” or “V.P.” to “Vice President”
For the Company name column, normalize the capitalization for all values that are all lowercase or all uppercase to Proper case
If the Website column is blank and the email address value exists, derive the website from the email address domain
For the City column, try to populate blanks by referring to the data in the phone column (area code) or postal code columns
For the State column, if the City column is populated, try to derive the State value from the city. If the City column is not populated, check the phone column and derive the State from the area code. If the phone column is not populated, try to derive the value from the postal code column.
For the postal code column, attempt to derive the value for blank rows from the phone column using the area code, or the city column
Task 2 - Scoring
A unique score should be generated for each of the categories below, between 1 and 10, with 1 being the lowest and 10 the highest. Add the score for each sub-category to the final spreadsheet, then total them together for a final aggregated score.
Scoring will be based on three factors:
Job title
Our company sells to Marketing and Sales teams. Our ideal contacts are at the Director and VP level.
C-level contacts are the second highest, Managers and Sr. Managers the third
Contacts outside of Sales and Marketing should be given a zero score
Company size (based on number of employees)
We typically sell better to companies with more than 2000 employees - score these companies the highest
Score any companies below 200 employees or with a blank value with a 0
Engagement
If someone attended the event, score them a 10; otherwise, score them a 0
Task 3 - Routing
Refer to the SDR State Assignments tab for this task.
If a company has more than 2000 employees, it should be routed to an Enterprise SDR.
If a company has less than 2000 employees, or the employee count is unknown, it should be routed to a Corporate SDR.
Once you have determined whether the lead should be assigned to an Enterprise or Corporate SDR, use the state value to look up which SDR they should be assigned to. If the state does not exist, assign the lead to “System User”.
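The scoring and routing rules above are deterministic enough to implement directly, which makes a handy spot-check against what the model produces. Here’s a rough sketch - the exact point values, the lookup-table shape, and the handling of the unspecified 200-2,000 employee band are my own assumptions:

```python
def title_score(title: str) -> int:
    """Score a job title per the prompt: Director/VP highest, C-level
    second, (Sr.) Manager third, non Sales/Marketing zero. Assumes titles
    were already normalized in Task 1 (e.g. "VP" -> "Vice President").
    Exact point values are assumptions; the prompt only gives a 1-10 range."""
    t = (title or "").lower()
    if "marketing" not in t and "sales" not in t:
        return 0  # outside Sales and Marketing
    if "vice president" in t or "director" in t:
        return 10
    if "chief" in t:
        return 7
    if "manager" in t:
        return 5
    return 1

def company_size_score(employees) -> int:
    """Over 2,000 employees scores highest; under 200 or blank scores 0.
    The 200-2,000 band is unspecified in the prompt, so a middle value is assumed."""
    if not employees:
        return 0
    if employees > 2000:
        return 10
    if employees < 200:
        return 0
    return 5

def engagement_score(attended: bool) -> int:
    """Attended the event: 10; otherwise: 0."""
    return 10 if attended else 0

def route(employees, state, assignments) -> str:
    """Pick an SDR from a {(tier, state): name} lookup, mirroring Task 3.
    The lookup-table shape is an assumption standing in for the
    SDR State Assignments tab."""
    tier = "Enterprise" if (employees or 0) > 2000 else "Corporate"
    return assignments.get((tier, state), "System User")
```

Again, the value of the prompt is that GPT handles the messy inputs (misspelled states, odd title phrasings) that this literal-minded version would miss.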
Unpack the prompt
As you can see, the prompt is highly detailed and explains specifically what values need to be referenced in each situation, how things should be changed or updated, and what information should be returned.
While this seems self-explanatory, knowing this information from the outset of the task gives you an advantage. You can give the model the instructions needed to complete the task almost entirely independently. This is where having pre-existing documentation comes in very handy.
The prompt is most useful when it comes to analyzing and taking action on unstructured, incomplete, or inconsistent data. Because you can replicate highly complex logic in natural language, you aren’t limited by technical knowledge or expertise with a specific coding language or technology.
Wrap Up
In my experience, I’ve found that cleaning up a post-event spreadsheet can take anywhere from 30-90 minutes, depending on the cleanliness of the data and the different follow-up tasks that need to be completed.
Generating and testing this thorough prompt took me about 90 minutes. Depending on the number of events you’re supporting and the other things on your plate, that may be a sensible investment of time.
I’ve also been testing other use cases - recently I exported the JSON code of a technical workflow and gave GPT specific instructions on how to read and analyze the workflow to generate a detailed overview and documentation. (Not needing to manually generate documentation? A win in my book!)
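If you’d rather see the shape of that documentation task in code, a minimal sketch of walking an exported workflow JSON and emitting an outline might look like this. The `name`/`type`/`steps` keys are assumptions for illustration - a real export from your automation tool will use its own schema:

```python
import json

def outline(workflow_json: str) -> str:
    """Render a workflow export as a plain-text outline.
    Assumes a simple {"name": ..., "steps": [{"type": ..., "name": ...}]}
    shape, which is a stand-in for a real tool's export format."""
    wf = json.loads(workflow_json)
    lines = [f"Workflow: {wf.get('name', 'Untitled')}"]
    for i, step in enumerate(wf.get("steps", []), start=1):
        lines.append(f"  {i}. [{step.get('type', '?')}] {step.get('name', '')}")
    return "\n".join(lines)
```

GPT’s version of this goes further, of course - it can describe what each step does in prose rather than just listing it.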
In the future, I could imagine a world where your GPT model of choice might be enabled to take action in specific tools (like Marketo, Salesforce, Outreach, etc.) on your behalf, completing the entire post-event process for you. Currently, this process only cleans the spreadsheet and provides the final data for you to action in the various systems.
Regardless of how things continue to play out, it’s important to continue experimenting on your own with multi-purpose tools. Don’t rely on a vendor to show you how AI can be a benefit to you and your team - try things on your own, find what works, and share it with others!