Looking into the processes of research, exploration, and validation that go into making UX copy decisions.
As of 2021, UX writing is one of the most sought-after roles on UX design teams. The role sits at the intersection of design, writing, and technology: guiding customers through complex paths, or offering the simple help, suggestions, and cues that support their decision making. These are all high-impact aspects of a customer’s experience.
But how do we choose the right words and sentences to do this? Is there a process for arriving at the right word, phrase, modal text, or error message? There must be, or UX writing would be little more than guesswork and luck.
When writing for an enterprise-level business with thousands of customers and tens of thousands of individual users, arriving at the right messaging through a repeatable process, and documenting those content decisions, becomes essential to creating a memorable experience at scale.
Once a product kickoff occurs, the research phase begins. This means reading about the problem we are trying to solve, validating the proposed solution from a design standpoint, and looking at customer support tickets to see how the problem affects customers.
Understanding the terms customers use when discussing the problem with businesses and with each other, and studying how other companies name and market their solutions, brings a fresh perspective.
Great copy is a big part of helping customers succeed, and writing it calls for a process. Mine involves laying out every design component in a table and making multiple attempts at the content for every element being added or reworked.
At present, we use Notion to document our iterations and decision-making process. Here’s an example of what I put together for the configuration settings for a feature called ‘Reason Codes’. These inputs help our customers track the reasons they are making changes to their customers’ subscriptions. My table in Notion usually comprises 5 columns.
Element

Here’s where I call out the exact part of the feature or configuration that I’m writing for. Paired with a screenshot of the design, this helps developers and product managers see more easily where each piece of text goes.
Variations / Iterations
Where the magic happens. I’ll try different rewrites, anywhere from two to ten versions of a particular line, before I arrive at the one that works. As I iterate, I move to Figma and paste the copy where it goes in the design, to see whether it works in the space available. This part often depends on what stage the designs are at. I try to keep my most recommended/latest version of the copy on top.
Status

Status is a colour code for the overall readiness of a copy item. I keep it simple with three labels:
- ‘Yay’ — It’s done
- ‘Ok’ — Almost there, but could use some work or needs some clarifications
- ‘Nay’ — Simply won’t do; either there’s something I haven’t understood clearly, or perhaps we’ve decided not to go ahead with that particular configuration.
If anyone is reviewing my work, this also shows them where I’m struggling or stuck, so they can share comments or suggestions.
Reasoning

This column doesn’t always get filled, as not every iteration requires an explanation. When it does, though, it is essential for explaining why one iteration works better than another. It also gives space to record why older versions were eliminated, in relation to what the user needs to know or understand at this point in a flow.
Questions

Although self-explanatory, the questions column leaves room to make progress when stuck and to better understand and develop the solution. I sometimes use it to tag people on my team or on the product management team whose answers I’m waiting on, so they can clarify things while I work on and test the copy. It also creates space for open discussion and prevents categorical statements or assumptions.
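The five-column table above can also be sketched as a small data model, which is handy if you ever want to track iterations outside Notion. This is a minimal sketch, assuming Python; ‘Element’ and ‘Reasoning’ are my shorthand labels for the columns, and the example copy is invented for illustration.

```python
from dataclasses import dataclass, field

# Overall readiness of a copy item, as described above.
STATUSES = {"Yay", "Ok", "Nay"}

@dataclass
class CopyItem:
    element: str                # exact part of the feature the copy is for
    iterations: list[str]       # rewrites, most recommended/latest first
    status: str = "Ok"          # 'Yay' (done), 'Ok' (almost), 'Nay' (won't do)
    reasoning: str = ""         # why one iteration works better than another
    questions: list[str] = field(default_factory=list)  # open questions

    def recommended(self) -> str:
        """The top iteration is the current recommendation."""
        return self.iterations[0] if self.iterations else ""

# Invented example: configuration copy for a 'Reason Codes' setting
item = CopyItem(
    element="Reason Codes settings: description text",
    iterations=[
        "Track why changes are made to your customers' subscriptions.",
        "Reason codes help you record why a subscription changed.",
    ],
    status="Ok",
    reasoning="Leads with the benefit; shorter than earlier drafts.",
    questions=["Say 'subscription changes' or 'changes to subscriptions'?"],
)
```

Keeping the recommended version first in the list mirrors the habit of keeping the latest copy on top of the Notion cell.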
The final step in making content decisions is consulting and collaborating with others to validate what you’ve arrived at. This is where conversations with fellow designers and product managers carry a lot of weight. Showing a PM options in the table, or implementing some of the final options in your design, helps them visualise the solution and view options side by side. This is also a great time to jot down answers to any questions or blockers and iterate collaboratively.
A/B Test within the team
If you have a team that is enthusiastic and willing to share feedback, consider creating a poll on an app like Slack to get feedback on which iteration your team likes more (and maybe even ask them to comment and explain why). This is a simple and free way to begin basic A/B testing on copy within your application.
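If the votes come out of such a poll, tallying them takes only a few lines. A minimal sketch, assuming Python and a hypothetical export of one entry per vote (this is illustrative data, not Slack’s API):

```python
from collections import Counter

# Hypothetical poll export: one entry per vote, keyed by copy variation.
votes = ["A", "B", "A", "A", "B", "A"]

tally = Counter(votes)
winner, count = tally.most_common(1)[0]
print(f"Variation {winner} wins with {count} of {len(votes)} votes")
# Variation A wins with 4 of 6 votes
```

Asking voters to comment on *why* they chose a variation, as suggested above, is what turns the raw tally into usable reasoning for the table.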
UAT (User Acceptance Testing)
Once designs are handed over, and development is done, we ideally do another round of copy analysis with a designer as part of what is called ‘UAT’ (user acceptance testing).
Here’s where you really get to experience your design and copy as a user of your app would. Sometimes copy that sounded fine while designing simply doesn’t anymore. Sometimes words or phrases feel repetitive. If the experience isn’t smooth for you as a pretend user, chances are it won’t be for your actual end users either. This may be the time to reach out to your developers and request small tweaks and changes.
This is also where it’s useful to look for the feeling we strive for at the crux of all design: what benefit does this bring customers? What problem does it solve for them? In product design, this has recently been called the ‘a-ha moment’.
When we built a Chargebee subscription creation workflow into our HubSpot integration, sales representatives loved that they could create subscriptions without navigating away from HubSpot, saving them time while still getting familiar with Chargebee’s workflow.
This is the feeling I believe we need to shoot for every time we design a feature or enhancement. A moment that makes users go “Wow, this is useful! I didn’t even realize how much I needed it, but I love it!”.
A/B test within your app
Although a post-release exercise, nothing compares to planning A/B tests within your application to gauge engagement with two copy variations among actual live users.
A/B testing inside applications, especially SaaS products, differs from testing on a website: it may depend on metrics like time spent on a page (less is better) and time taken to complete a task, rather than direct conversion-based testing. Consider proposing these options as part of a feature’s beta test or limited release, so you can use the data to validate the best option before taking it live for all customers.
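As an illustration of that kind of analysis, here is a minimal sketch, assuming Python and made-up task-completion times for two copy variants from a limited release:

```python
import statistics

# Hypothetical task-completion times (seconds) per copy variant.
variant_a = [42.0, 38.5, 51.2, 44.8, 40.1, 47.3]
variant_b = [35.9, 33.4, 39.7, 36.2, 31.8, 38.0]

for name, times in (("A", variant_a), ("B", variant_b)):
    print(f"Variant {name}: mean={statistics.mean(times):.1f}s "
          f"stdev={statistics.stdev(times):.1f}s n={len(times)}")

# For time-to-complete, lower is better. A real decision should also
# check sample size and statistical significance (e.g. a t-test).
better = "A" if statistics.mean(variant_a) < statistics.mean(variant_b) else "B"
print(f"Faster on average: variant {better}")
```

With samples this small the difference could easily be noise, which is why the beta or limited-release framing above matters: it buys you enough data to decide with confidence.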
To wrap it up
The advantages of following a multi-step content validation process are manifold: you have a repository of content iterations and reasoning to revisit whenever in doubt, you have pre-written content to draw on when writing for a similar feature or flow, and you capture technical learnings and specifications for future use.