BY MIKE FAZIO
One of our recent posts outlined the need-to-knows about Pardot Sandboxes. In this one, we’re going to go a bit deeper on how to put this to work in your organization.
But first, a very important call out — Sandboxes are a core part of building cool things in Sales Cloud, but the current Pardot equivalent is slightly different. The biggest difference is that a Pardot Sandbox doesn’t support moving data or metadata between production and the sandbox, or vice versa (at least not yet).
And really, that’s OK. A lot of the functionality in Pardot can be safely and effectively tested in production.
I know, I know, this scares the living daylights out of most Salesforce admins who wouldn’t be caught dead doing that in Sales Cloud — but I promise it’s not as terrifying as it sounds. In this blog, I’ll cover:
- What to test in a Pardot Sandbox
- What’s okay to test in production
- Where more advanced testing MAY be required
Let’s dive in.
What to test in a Pardot Sandbox
The most important things to actively test in a Pardot Sandbox are the items related to the flow of data between Pardot and Salesforce — the things that could change records or cause data to be lost.
If you don’t have access to a Pardot Sandbox, you can request a Pardot Training Environment — and most of the same logic of what to test in Sandboxes applies in those as well.
1. Test the lead/contact/prospect sync between Pardot and Salesforce
The trigger for Pardot to sync prospect records to Salesforce is simple: as soon as a prospect is assigned to a user or queue (or via an active Salesforce assignment rule), it will be linked with a lead or contact in Salesforce.
The process for determining what records sync from Salesforce to Pardot is more nuanced. The default syncing behavior between Salesforce and Pardot is determined by the connector user’s permissions and role. Any records that aren’t shared or “visible” to the connector user don’t sync into Pardot.
With the introduction of the V2 connector in early 2019 came another tool for Pardot Advanced Edition users: the ability to use Marketing Data Sharing. Marketing Data Sharing allows you to further refine your sync behavior by choosing one field on Leads, Contacts, Opportunities and custom objects to determine which records are synced to Pardot. (This is also what dictates syncing to multiple Business Units.)
For example, if you only want U.S. Leads synced to Pardot, you can create a custom field named Region and set the rule to be “If Region equals United States,” then sync the record.
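The Region example above boils down to a simple yes/no decision per record. Here is a minimal sketch of that decision logic in plain Python; the record structure and field names are illustrative assumptions, not Pardot's actual API:

```python
# Hypothetical sketch of a Marketing Data Sharing rule: only records whose
# Region field equals "United States" are allowed to sync to Pardot.
# Record shape and field names are illustrative, not Pardot's API.

def should_sync(record: dict) -> bool:
    """Mimic the rule 'If Region equals United States, sync the record'."""
    return record.get("Region") == "United States"

leads = [
    {"Email": "a@example.com", "Region": "United States"},
    {"Email": "b@example.com", "Region": "EMEA"},
]
synced = [r["Email"] for r in leads if should_sync(r)]
print(synced)  # ['a@example.com']
```

Sketching the rule like this before configuring it is a cheap way to agree with stakeholders on exactly which records should and shouldn't cross over.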
There can be a lot of moving pieces to make sure that the records you want to share between the systems are correctly managed. Since this process could potentially involve making data changes to leads/contacts, this is a good candidate for Sandbox testing.
To test this in a Sandbox, I would recommend pausing the connector while you’re building. Once you’ve defined the business logic for what records sync from Salesforce to Pardot, then configure the profile, permissions, roles, and Marketing Data Sharing.
Run a few reports to get a list of records that you would expect to sync to Pardot and turn the connector sync back on. Give it a day to catch up and then compare the lists to identify anything that may need tweaking and troubleshooting.
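The "compare the lists" step is easy to script. Here's a minimal sketch that diffs two lists of email addresses; in practice you'd load them from your Salesforce report export and your Pardot prospect export:

```python
# Sketch: compare the records you expect to sync (from a Salesforce report)
# against what actually landed in Pardot. Inputs are plain lists of email
# addresses; in practice you'd read them from the two CSV exports.

def compare_sync(expected_emails, actual_emails):
    expected, actual = set(expected_emails), set(actual_emails)
    return {
        "missing_from_pardot": sorted(expected - actual),   # should have synced, didn't
        "unexpected_in_pardot": sorted(actual - expected),  # synced but shouldn't have
    }

result = compare_sync(
    ["a@example.com", "b@example.com", "c@example.com"],
    ["a@example.com", "c@example.com", "d@example.com"],
)
print(result)  # one record missing, one unexpected
```

Both buckets matter: records missing from Pardot usually point to connector-user visibility or Marketing Data Sharing criteria, while unexpected records mean your criteria are too loose.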
2. Test sync behavior with large volumes of records
An issue that can come up in larger orgs with complex automation on the Salesforce side is hitting Salesforce governor limits and/or record and object access issues when large volumes of data are changed all at once.
I would recommend doing a test of large-scale changes to see if you may need to rework any lead or contact triggers/automations in Salesforce. To test this, export all of your prospects, make an arbitrary change on every record to a field that is synced to Salesforce (e.g. add a period to a text field, or something else that won’t corrupt the data), and then import them back into Pardot.
Pardot breaks up its changes into batches of 50 records at a time to minimize the load on Process Builders, workflow rules, triggers, and so on, but every environment is different, and it’s ideal to know if there are issues to anticipate before moving to production.
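To picture what that batching means for your Salesforce automation, here's a quick illustration in plain Python (no Pardot API involved): 1,000 updated prospects arrive as 20 separate waves of trigger/flow executions, not one big transaction.

```python
# Illustration of Pardot's batching behavior: changes are pushed in groups
# of 50 records, so 1,000 updated prospects hit your Salesforce automation
# (triggers, Process Builders, flows) as 20 separate waves.

def batches(records, size=50):
    for i in range(0, len(records), size):
        yield records[i:i + size]

prospects = list(range(1000))     # stand-ins for prospect records
waves = list(batches(prospects))
print(len(waves), len(waves[0]))  # 20 50
```

Each wave is its own transaction against Salesforce governor limits, which is why per-record automation that barely passes in a one-off test can still fail under a bulk import.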
If you do get errors, export the error list and look for clues as to what is causing it. Pardot will capture the error message that Salesforce sends for each failed record. In some cases those errors can be fairly common, standard errors (like too many SOQL queries) that you can usually drop into Google to find more info.
Or you’ll see a specific Process Builder named as part of the problem, which indicates where to look next. Sometimes the error is a custom error coming from Apex or a custom validation rule or field validation, in which case doing some sleuthing through your validation rules, field settings or Apex classes/triggers is required.
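When the error export is large, tallying the messages makes the dominant failure obvious. A minimal sketch, assuming the export has `Email` and `Error` columns (match these to your actual export's headers):

```python
# Sketch: tally Salesforce error messages from a Pardot sync-error export
# to find the most common failure. CSV column names here are assumptions;
# adjust them to match your actual export.
import csv
import io
from collections import Counter

# Inline stand-in for the exported file
export = io.StringIO(
    "Email,Error\n"
    "a@example.com,Too many SOQL queries: 101\n"
    "b@example.com,FIELD_CUSTOM_VALIDATION_EXCEPTION: Region is required\n"
    "c@example.com,Too many SOQL queries: 101\n"
)
counts = Counter(row["Error"] for row in csv.DictReader(export))
for message, n in counts.most_common():
    print(n, message)
```

Fixing the top one or two messages typically clears the bulk of the failed records, and re-importing just those records retries the sync.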
3. Test field mapping & field-level sync behavior
As part of the implementation process, you should be reviewing your Lead, Contact, Account, and Opportunity fields to determine which to bring over to Pardot. Any fields needed for personalization, segmentation, and training should be set up in Pardot.
With Lead and Contact fields, you can configure sync behavior one of three ways:
- Use Salesforce’s value as the master
- Use Pardot’s value as the master
- Use the most recently updated record
Accounts and Opportunity fields sync one way — from Salesforce to Pardot.
I would recommend focusing Sandbox testing efforts on Lead and Contact fields. You will want to compile a data dictionary outlining which Salesforce field names map to Pardot prospect fields to ensure the data that is passing back and forth is correct.
Review any synced fields that are required at the field level (vs. required on the page layout) or restricted picklists — these are frequent sources of sync errors. To test the sync behavior on these fields once they’re configured, create an import file and observe the behavior. Are the correct values overwritten? Is key data retained? Do you get any sync errors?
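Restricted picklists are worth pre-flighting before you even run the import. A minimal sketch of that check, where the field name and allowed values are illustrative assumptions:

```python
# Sketch: pre-flight an import file against restricted picklist values so
# bad values surface as a report instead of as sync errors after the fact.
# The field name and allowed values below are illustrative assumptions.

restricted_picklists = {
    "Lead Source": {"Web", "Event", "Referral"},
}

def preflight(rows):
    """Return (row_number, field, bad_value) for every disallowed value."""
    problems = []
    for i, row in enumerate(rows, start=1):
        for field, allowed in restricted_picklists.items():
            value = row.get(field)
            if value and value not in allowed:
                problems.append((i, field, value))
    return problems

rows = [
    {"Email": "a@example.com", "Lead Source": "Web"},
    {"Email": "b@example.com", "Lead Source": "Billboard"},  # not allowed
]
print(preflight(rows))  # [(2, 'Lead Source', 'Billboard')]
```

Your data dictionary is the natural source for the allowed-values map, so the two artifacts reinforce each other.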
4. Test User Sync
When you enable Salesforce User Sync with Pardot, you will go through a process of mapping Salesforce profiles to Pardot roles. These roles (Sales, Sales Manager, Marketing, Administrator, or custom roles) dictate what your users will be able to see when logged into Salesforce or viewing Pardot data from the Lead or Contact record in Salesforce.
I would recommend configuring this in the Sandbox and verifying that this behaves as expected. Try using the “log in as” functionality in Salesforce to view Pardot information as different types of Salesforce users and confirm that the expected level of access is present.
What’s okay for Pardot Admins to test in Production
Unlike in a Salesforce instance, many of the changes you make in Pardot are pretty simple to delete without major repercussions, especially when first starting out.
Once you have live data in the Pardot production org, though, you will want to be careful of starting automations that change data or trigger communication to a large portion of your database.
These are the items that are low risk and should be built in production, along with some quick testing tips for each type of asset:
1. Email templates & drafts
QA email templates before sending by running a Litmus render to preview on devices/browsers and check that you’ll pass spam filters.
Then, when it’s good to go, get final approval by sending yourself a test email and/or by sending to a test list. Double check the subject line, pre-header text, images, content, and click every link.
2. Forms
QA these before publishing on your website by filling them out in incognito mode and verifying that the completion actions fire as expected. Once embedded in a landing page, test using a few different browsers or even a cross-browser testing tool like Browserling.
3. Form handlers
Similar to forms, QA these by filling out in incognito mode and verifying that the completion actions fire as expected. Try “messing up” on required fields and verify that the expected behavior is taking place.
(Side note: In general, I would recommend making fields required on the front-end form, and making nothing – except email address – required in the form handler in Pardot. This ensures that no submissions accepted by the front-end form get “rejected” by Pardot and lost.)
4. Static lists
There’s not a whole lot to test here.
5. Dynamic lists
Set your filter criteria, then test your logic by clicking the “Preview” button. The preview shows which prospects currently match your criteria, so you can sanity-check the logic before using the list anywhere.
6. Custom Redirects
To test, create your link and access it from an incognito browser. Verify that Pardot registers a link click and you’re in business. You can also test it as a cookied visitor to make sure your completion actions fire as needed.
7. Folders, Tags, Naming Conventions, Campaigns, Files, Social Posts
Build right in Prod — these don’t really require testing.
8. Page Actions
After you add tracking code to your website, set up Page Actions where appropriate and then visit those pages as a cookied prospect to make sure completion actions are firing as expected.
9. Dynamic Content
Testing dynamic content depends on where the content is located. If it’s in an email, go through your normal email testing procedures. If it’s on a landing page, add it to the page before it’s live and visit the page as a cookied prospect. Then change that user’s profile to match a different variation of your dynamic content.
If you’re using dynamic content on your website, I recommend working with your web developer to get a staging site where you can test how the dynamic content will work within the website (check mobile, a few desktop sizes and various browsers).
10. Email Preference Centers & Unsubscribe Page
Similar to testing the other links in your email, you can (and should) test your EPCs and Unsubscribe Pages. Just make sure to re-subscribe your user after you click the “unsubscribe” button.
11. Scoring
The scoring model in Pardot works retroactively when changes are made, so if you want to try out a certain model today and then revisit and make changes in a month, you can do that and the values will update automatically.
The one caveat here is that the changes you make directly to a prospect’s score (with completion actions or automation rules) are not retroactive, so if you clear a score, the only score you can get back is from the scoring rules.
12. Grading
In Pardot, grading is controlled by a series of automation rules and can be updated and changed as your business needs change. I would consider testing the automation rules that control grading just like other automation rules (see the next section).
As you can see, a lot of what you build in Pardot is self-contained and can be staged, tested and ‘deployed’ all from within your production Pardot instance. However, like I mentioned at the beginning of this section, there are a few things that require a bit more intentional testing to avoid issues in production.
Where More Advanced Testing May Be Required
In a perfect world, there would be a “push to production” button where we can build large Engagement Studio Programs and complex Automation Rules in a Pardot Sandbox and then move them to our live org.
But we don’t have that yet (although, according to Pardot, it’s coming). There are some assets you may be tempted to build first in a Sandbox, but doing so means your team spends hours manually rebuilding them in production.
Admins need to weigh the risk and reward of building these types of things directly in Production, first in a Sandbox, or a mix depending on the complexity. Keep in mind that rebuilding a second time in a new system opens the door to additional mistakes and human error.
1. Automation rules
Automation rules are powerful — and with great power, comes great responsibility.
The hardest part about automation rules is getting the logic correct. So, one idea is to build the ‘criteria’ logic as dynamic lists in Production first. Then you can let these dynamic lists run for a few days and inspect the membership frequently. If the criteria are pulling in the prospects you would expect, it’s pretty safe to use that same logic in the criteria for your automation rule and build live in Production.
As a final test, preview the automation rule before turning it on (and take one more look at the actions section to make sure it’s what you want to happen). Yes, this sounds like a royal pain in the keyboard, but automation rules can permanently make massive changes to your data, so we want to really look hard at the logic of the criteria.
2. Engagement Studio
Engagement Studio Programs have the widest range of functionality and can edit data, send emails, create tasks and campaign members in Salesforce, assign prospects, and send notifications. So, we need to be intentional with how we build and test our programs.
A few ideas: First, keep it simple. Branching logic is good but can quickly get out of hand; if you need to, split the logic into multiple, more focused programs.
Second, double check your dates and your logic gates.
Third, create test prospects and add them to the program first. Make sure they meet all of your rules so you can watch how they flow through the system. Depending on your program, you might have to shorten your wait times so you can test in a reasonable amount of time, but keep the program as true to the final as possible.
3. Lead assignment rules/processes
If you’re handling lead assignment in Pardot, you will most likely be building it using completion actions, automation rules, and/or Engagement Studio programs. As such, I would test these using the other suggestions we have for those features. The one difference is that it might be worth creating a few test prospects and running them down various paths to watch how they are assigned.
4. 3rd party integrations
There is a wide range of features and functions that outside integrations have and, as such, need to be handled on a case-by-case basis based on the integration.
Some integrations are simple and won’t directly affect Salesforce data, like the Google Analytics and AdWords connectors. But others, like webinar connectors, can make changes to Prospect data, so you’ll want to test those integrations using test Prospects and watch how and when data changes. Also note that the Sandbox uses a different URL (pi.demo.pardot.com), which a 3rd-party integration may not support, so Sandbox compatibility varies by vendor.
Testing & Change Management in Pardot
What questions do you have about best practices for testing Pardot assets and automation? What barriers have you run into? Any questions about Pardot Sandboxes?
Let’s hear it in the comments!