It's been 4 weeks since I started working as an Associate Product Manager at Simply Wall St in the Growth Team. Our team's objective is to run various experiments across our entire product to improve and optimise the user experience.
It's been a huge learning curve, as I've had to learn the ins and outs of not only Product, but also Growth.
I started this fortnightly newsletter with the intention of sharing my journey as a PM navigating the world of Product and Tech. Today's edition will share some of the key learnings I have acquired in the first month of being a PM.
I have broken these lessons down into a few key themes, which are as follows:
Product (Non-Technical) - I will share some user research tips I have picked up, and give you a plan to approach messy, unstructured problems.
Product (Technical) - I will share some useful considerations when measuring the success of a product, and walk you through how to master the data as a PM.
Teams and Ways of Working - I will share some perspectives on how you can build a culture of learning within your team and organisation.
Product (Non-Technical)
Solving messy problems
My team is currently working on how we can increase user activation across our product. For those who aren't aware, user activation refers to the moment a user experiences the core value of your product, also known as the 'aha moment'.
As an example, Facebook defines user activation as adding 7 friends in 10 days. This isn't just some arbitrary number; rather, it's the point at which a user becomes significantly more likely to become a retained user. In Facebook's case, a user's likelihood of returning to Facebook increases dramatically once they have added more than 7 friends in 10 days.
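To make this concrete, here is a minimal sketch (using pandas, with made-up numbers and hypothetical column names, not Facebook's or Simply Wall St's actual data) of how you might check whether a candidate activation threshold really separates retained users from the rest:

```python
import pandas as pd

# Hypothetical user-level data: friends added in the first 10 days,
# and whether the user was still active 30 days after signing up.
users = pd.DataFrame({
    "friends_added_10d": [0, 2, 3, 5, 7, 8, 9, 12, 15, 1],
    "retained_30d":      [0, 0, 0, 1, 1, 1, 1, 1,  1,  0],
})

# Compare retention on either side of a candidate activation threshold.
threshold = 7
activated = users["friends_added_10d"] >= threshold
print("Retention (activated):    ", users.loc[activated, "retained_30d"].mean())
print("Retention (not activated):", users.loc[~activated, "retained_30d"].mean())
```

If retention jumps sharply above the threshold, you have a reasonable activation definition; if not, try other candidate behaviours or thresholds.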
Context aside, user activation can be a messy product problem. There are a number of reasons why users might not be activating, so getting to the root of this complex problem can be challenging. You can quickly become overwhelmed with the number of potential paths you can take when trying to improve user activation.
Before diving straight into the problem, it is helpful to establish a plan of attack. What kind of approach can you adopt when trying to solve messy problems like this?
This is the approach that worked well for me when trying to tackle this mammoth issue:
Start off by dumping everything you know about the problem in a diagramming tool like Figma, Miro, etc. Ask yourself: what are some potential reasons why ___? In my case the question was: what are some potential reasons why users might not activate? This becomes your mental model of the problem (i.e., the way you see the problem).
Draw connections between any pieces of information you have, e.g., X could be a cause of Y, or A could be a sub-issue of B.
Categorise the issues into some key branches / buckets, which become your overarching themes.
Validate your mental model by showing other people how you view the problem. Is there anything you may not have considered? Do they agree with the way you have structured the problem?
Quantify the themes you identified in step 3 (through user surveys, interviews, data analysis, etc.). This should give you a good idea of which part of the experience you need to focus on optimising (see the short sketch after this list for one way to tally the results).
Once you have an idea of which issues / themes are more critical than others, form a hypothesis (i.e., what you believe to be true) and brainstorm some experiments that seek to disprove your hypothesis.
Run an experiment (e.g., an A/B test), gather any new learnings and tweak your mental model.
The above steps are iterative, as you will find that you keep learning new things that challenge your initial mental model of the problem. Keep running these experiments, optimising and chipping away at the problem in small chunks. Big, complex problems like improving user activation are never solved with a single solution or idea.
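As a rough illustration of step 5, here's a minimal sketch (with hypothetical themes and counts) of tallying tagged survey or interview responses so the biggest theme surfaces first:

```python
from collections import Counter

# Hypothetical mapping of responses to the themes identified in step 3.
# In practice these tags would come from coding interview notes or survey answers.
tagged_responses = [
    "onboarding confusion", "discoverability", "onboarding confusion",
    "pricing concerns", "onboarding confusion", "discoverability",
]

counts = Counter(tagged_responses)
total = sum(counts.values())
for theme, n in counts.most_common():
    print(f"{theme}: {n} responses ({n / total:.0%})")
# The theme with the largest share is a good candidate for your first experiments.
```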
User research
I have only scratched the surface of the vast space that is user research. User research really epitomises the old saying: "it's not about knowing the answers, it's about asking the right questions".
The tricky part about user research is that there's an infinite number of questions you can ask users, but they're often limited to answering a few. Survey completion rates will usually drop dramatically if you bombard users with questions. By the same token, you must be careful not to overwhelm users with too many questions during user interviews, as the quality of the conversation and the resulting richness of insights will drop if you dilute the conversation with too many questions.
This is where I think many people go wrong with user research. They often ask users way too many questions and end up with shallow insights that don't move them in the right direction.
So how do you know if you're asking the right questions? Think about the decision that you ultimately have to make first, and work backwards from there to come up with the right questions to inform this decision. In addition, it helps to think about what you will do once you have the answer to your question. If you aren't planning to do anything with the information, then maybe you're not asking the right question.
Putting all this together through an example, let's assume the decision you want to make is the following: decide whether we need to redesign our user onboarding flow. One question which would help you make this decision might be: do users understand how to use our product after completing the onboarding flow? If the answer is no, then we can look into which areas of the product users are unfamiliar with to see if there are any gaps in our onboarding flow. In this example, the question directly moves us towards the decision we need to make, and the answer isn't leading to a dead end; rather, it is sparking further investigation, which will lead to more fruitful outcomes.
Product (Technical)
Measuring success
When it comes to running experiments, accurately measuring success is a deciding factor between a good experiment and a great experiment.
Thinking about how you will measure success is just as important as thinking about what metric you will use to measure success.
As an example, you might have settled on the number of times a feature is used per customer in their first week after signing up to determine the success of a new feature launch. However, you also need to consider how you will measure frequency of use. Will you use the average, the median or some other measurement to gauge success?
Sometimes, the average can be skewed due to power users who may end up using the feature 100 times compared to most users who might use the feature 10 times. In this case, if the frequency distribution of the data seems skewed, using the median can provide you with a more accurate picture.
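Here's a tiny sketch (with made-up usage numbers) of how far the two measures can diverge on skewed data:

```python
import statistics

# Hypothetical weekly usage counts for a new feature: most users use it ~10 times,
# but a couple of power users use it far more.
usage_per_user = [8, 9, 10, 10, 11, 12, 9, 10, 100, 120]

print("Mean:  ", statistics.mean(usage_per_user))    # pulled up by power users
print("Median:", statistics.median(usage_per_user))  # closer to the typical user
```

Here the mean (29.9) suggests the feature is used roughly 30 times per user, while the median (10) tells the truer story about the typical user.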
Mastering the data
When it comes to the technical side of Product Management, you don't have to be a technical wiz, but it pays to understand the data. At the very least, you should understand your database schema. You should be familiar with the data that is being stored and how different tables link to each other.
If you are familiar with the data, it means you know where to look first and what questions to ask when trying to tackle problems.
Here's how I've approached understanding the data:
Reach out to a data analyst at your company, or a data/back-end engineer if you don't have a data analyst, and ask them to walk you through the data schema.
Understand what is stored in the main tables of your DB and how you can access the data if you need to.
Read up on any documentation regarding experiments the company has run in the past. There will be a treasure trove of useful metrics to get your head around.
Get a list of data/reporting tools your team uses and familiarise yourself with them (e.g., Google Analytics, Metabase, Braze, Hotjar etc.).
Practice by trying to generate some of the data that is in these commonly used reports (see the sketch after this list for one way to start).
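As an example of that last step, here is a minimal sketch (using pandas, with hypothetical event names and a made-up events table, not Simply Wall St's actual schema) of reproducing a simple report metric, weekly sign-ups, from raw event data:

```python
import pandas as pd

# Hypothetical export of a raw events table.
events = pd.DataFrame({
    "user_id":   [1, 1, 2, 3, 3, 4],
    "event":     ["signup", "portfolio_created", "signup",
                  "signup", "portfolio_created", "signup"],
    "timestamp": pd.to_datetime([
        "2022-05-02", "2022-05-03", "2022-05-04",
        "2022-05-09", "2022-05-10", "2022-05-11",
    ]),
})

# Reproduce a simple report metric: unique sign-ups per week.
signups = events[events["event"] == "signup"]
weekly_signups = signups.resample("W", on="timestamp")["user_id"].nunique()
print(weekly_signups)
```

If your hand-built number matches the dashboard, you understand that slice of the data; if it doesn't, the gap is usually where the most useful learning is.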
Teams and Ways of Working - Knowledge Sharing
Having a system in place to document key learnings is critical for teams to avoid relearning and wasting resources. If your current team doesn't already have a process in place to capture and share learnings, I suggest addressing this gap as soon as possible. A simple process could be writing a Confluence document after each project and sharing it widely.
Sharing learnings - not just with your immediate team but across teams - is just as important as capturing and documenting them. A lot of teams might stop at capturing learnings, but sharing them ensures you don't operate in knowledge silos.
Our team has a weekly growth learnings meeting, which is helpful for sharing learnings within the team. We also have a growth Slack channel, where we constantly share new growth learnings with our other product teams, engineering, marketing, customer success and operations.
Sharing learnings across teams is especially important when a product scales and has multiple verticals. In this case, you will likely have learnings (e.g., regarding customer behaviour) that can be applied from one area to another area of the product.
Thanks for Reading!
If you got to the end, here are some bonus resources related to today's discussion, in case you're curious and want to read more. I will include these in every newsletter:
That's all for this edition of the Product Pill. I hope you gained some valuable insights. Thanks for following me on this journey.
If you know someone else who would find this type of content useful, I would really appreciate it if you shared it with them!
See you soon!