Trucking through SaaS:
Lessons for User Research
Historically, Uptake’s icon partnerships provided a committed, albeit limited, pool of users for the user research team to do research with. When Uptake decided to pivot to a SaaS approach, it ushered in a new set of challenges to practicing research within an industrial enterprise context. This case study illustrates how the user research team navigated those challenges to successfully engage with Uptake’s first SaaS industry, Fleet.
Adapt the user research practice to the new parameters of Uptake’s SaaS product approach.
New forms of collaboration between user research and other stakeholder teams;
Unanimous agreement facilitated by user research to focus the long-term product value on predictive maintenance;
Higher obesity and blood pressure in the fleet maintenance industry from donuts gifted in exchange for research.
Disclaimer: This case study does not represent the opinions of Uptake or its UX team.
The Back Story
When people ask me what Uptake does, I usually start off by asking them to stick with me while I explain that it builds “predictive enterprise software for the industrial sector,” which usually results in the googly eyes and open jaw I was trying to prevent. It sounds a lot fancier than it actually is, so let me unpack what that means. It may help to provide some context around the company, the team, and the industry before jumping into the case study.
Uptake is an industrial software company founded on the premise that data from machines (think: locomotives, aircraft, wind turbines, dozers, etc.) has untapped potential. By capturing and analyzing this data, Uptake’s software can yield predictive insights to prevent failures, reduce costs, and increase productivity, reliability, and safety.
Uptake initially strategized to form partnerships with key industry icons and leverage the combined domain and technological expertise to make industry-specific software plays. Starting with Caterpillar, a $22.7 billion construction industry icon, it developed custom products with each partner that, once mature, would be released into the broader market. More recently, Uptake has pivoted to a SaaS product approach to pursue its transformation into a cross-industry insights platform.
The Users & User Research Team
User research at Uptake supports this work by cultivating an understanding of our users and their environment, tools, and processes for work. We surface these insights to internal teams in order to help drive the creation of products that meet user needs. Each researcher is assigned to one or more business lines and specializes in the domain knowledge of the relevant industries. We also often fill a relationship-building capacity with our users.
In contrast to consumers, our users are experts in their industries. They have decades of experience doing their jobs, and that makes their trust hard-won when it comes to any new technology that means relinquishing Excel or changing their processes. They are scarce in number, difficult to get access to, and the fact that their bosses are paying for the software can make them hesitant to provide honest feedback. However, building rapport with them often reveals them to be big softies at heart who love any chance to talk about their jobs.
The Fleet Industry & Fleet Team
Fleet is the trial industry in which Uptake piloted a SaaS product approach. The industry’s common denominator is the operation of commercial vehicles, including, most notably, trucks. While trucks are commonly known for hauling and freight transport, they can also be used for transit, service, and vocational work, among other functions. This makes the industry incredibly diverse, and the ensuing variety of truck use affects the types of patterns seen across their life cycles and the support they need.
The Fleet team at Uptake is a cross-functional set of resources from across the company that have been allocated to its business line. For the sake of this case study, the key stakeholders to take note of aside from the user researchers are the business leads, who set the business line strategy; the sales leads, who make and manage the sales accounts; and the product manager, who defines the product direction.
The Case Study
Uptake’s vision has always been to become the most powerful cross-industry insights platform. Even back in 2015 when I had just joined the then year-old company as an intern, I remember listening to the CEO talk about the near-mystical powers of machine data and its infinite applications. Around the time Uptake turned three years old, it turned out to be a dream too powerful to be contained in industry silos. While icon partnerships had helped Uptake get its name out, they had also placed certain data restrictions on Uptake that prevented it from taking full ownership of its technical expertise. The company’s leadership realized that it would have to part ways with its icon partner strategy if it were to have a chance at realizing its vision. The answer was a SaaS product strategy, and it would be piloted in Fleet.
The pivot to SaaS made our job as user researchers significantly trickier. Icon partnerships had provided a fairly steady source of access to a limited number of users, and though the SaaS approach expanded that user pool, it also complicated things. While the sales team flaunted our software’s competitive advantage in being intuitive and easy to use, we had no idea who the users would be, whether they would be comparable to our current ones, and if our software would be usable for them. This was not simply solved by finding our own participants, as we were not allowed to do any research. Among other reasons, research often involved asking “dumb” questions that made it look like we did not know what we were doing, and pivoting to SaaS was supposed to be about owning our expertise.
The user research team for Fleet employed two strategies to stay relevant to the new context created by this pivot. The first was by adapting our research practices to new methods for collecting data and producing insights. Our scrappy attitude and willingness to collaborate with other teams ultimately made way for us to achieve a traditional cadence for user research. The second was by filling a gap at Fleet’s proverbial table that resolved the product-sales tension. This earned us respect from all sides of the team and helped key stakeholders remain aligned towards the same goal.
Part 1: A New Methods Toolkit
How user research rebranded discovery research and changed its internal pitch to cast the practice in a new light.
The user research engagement in Fleet began with a task broader than the business line that was its pilot: we were asked to define the users for the entire SaaS solution. The problem was, development had not started at this point, so there could not be any users yet. We tried to clarify this point and asked for permission to recruit participants to do research with, but having “user” in our job title still made us experts on users without having talked to any users. In spite of the recognition our team had received for its work in mature business lines and the demand for our work in emergent ones, the company did not understand what user research was and where it fit in. Primary research was misunderstood to be a kind of validation tool, and it was subsequently deprioritized until after our product had been built.
Without being able to conduct any kind of discovery research, we made the most of our task, which became a kind of fiction writing. We abstracted information about users from other industries we had worked with and browsed job sites to find postings for titles similar to the ones of professionals we would have interviewed. Later, we relied on internal subject matter experts (SMEs) to help configure a hypothesis of user archetypes that would need to be validated. However, even with hopeful similarities to other industries, we had no clue how close or far we were from the truth.
Research Call Process
At the same time, the SMEs thought they had found an entry point into the Fleet market: the electronic logging device (ELD) mandate. As of December 2017, all fleet companies would be required to have a device that electronically recorded driver status. By selling a companion ELD app with Uptake’s SaaS product, the SMEs thought the product might gain better traction in the market. With the business leads’ buy-in, their plan was put into action. It put the company’s speed and sales abilities to the test with a six-week scope to build and deploy the app and a bootstrap team pulled together from inside the company to start making cold sales calls.
As the Fleet team moved forward with its plan, user research was at risk of being cut out. With the user archetypes defined, our job was seen as complete. Moreover, being a small, overallocated team whose work was perceived as a bottleneck to moving fast did not help our case for inclusion. In fighting against the assumption that anyone could do user research, we ourselves had come to be seen as overly possessive of our practice. Additionally, going out into the field and spending time with users took valuable time that the company did not want to sacrifice in speed.
Yet we recognized that the stakes were high in this guinea pig experiment. Not only were there risks in the assumptions inherent in our user archetypes, but being cut out now in the transition to SaaS had the potential to make or break the future of our practice at the company. Preferring some inclusion over total exclusion, we decided to make ourselves useful.
In order to support that effort and begin collecting validation research, our team suggested a sales call process based on discovery research practices. With the sales leads onboard, we rolled out a three-stage process that walked the sales team through basic user research. This partnership enabled both teams to be successful and meet their objectives; the bootstrap sales team felt empowered with scaffolding to conduct sales calls, and the research team was able to capture customer and product insights that might help target sales and inform product direction. Although this process required significant legwork on our end to process all of the feedback from the calls, synthesizing and sharing that information with the sales team at a frequent cadence made the sales leads feel like our work was a value add.
At the end of three months, the bootstrap sales team that had been pulled together for the initial sales push disbanded, and a newly minted senior sales team took their place. As seasoned veterans of the sales world, they were not as enthusiastic about a research call process that told them how to conduct their sales calls. However, they were still open to finding a different way of working together.
This led to a simplification of the research call process to an initial research call led by the sales representative with input from the user researcher. Our sales team found more receptiveness from cold contacts when leading with an informational rather than a direct sales approach, and this also primed the call participant to be open to answering questions from user research about their work. Yet while this helped develop a new cadence in the sales and user research partnership, over time, the depth of research provided by these calls felt insufficient for our state of knowledge. We felt we would benefit from an independent space to ask follow-up questions. Broaching the subject with the sales leads, we agreed to establish what came to be known as the fleet advisory council, and we worked with the brand and marketing teams to bring it to life.
The vision for the fleet advisory council was a group of industry SMEs who could advise the Fleet team on its product direction and advocate for our product in the industry. We planned to recruit SMEs for the council from research call participants and incentivize them with membership on the council to do a follow-up call with user research. The advisory council was effective because it appealed to participants’ industry expertise and gave them a platform to provide feedback on an emerging market product (and not to mention, receive lots of Uptake swag). The council held the promise of what the icon partnerships used to be for us: a source of SMEs and users for us to conduct research with.
Unfortunately, the sales team also thought it created a future source of sales leads. This conflation between sales and research leads created tension in our partnership over the intention of the council. We did not want the sales team selling to participants who had agreed out of goodwill to answer our questions, but this made the sales team feel as though we were keeping potential sales leads from them. We discussed this friction between sales and research objectives, and the sales team eventually recognized the value of having resources that were off-limits to sales. Having this conversation helped us rebuild a foundation of transparency and trust that, along with organized recordkeeping, reestablished our partnership.
Part 1 Conclusion
Adapting discovery research methods to our new context enabled user research to create new partnerships and change the perception of our practice and its relevance to SaaS.
Over time, scattered research opportunities through advisory council members and business lead connections transformed into more consistent research interviews and onsite visits emerging from a stable research pool. This provided us with the depth of research we needed to develop our initial findings into a real understanding of user needs and pain points that we could meet to create a valuable product for the fleet industry.
All in all, this research revealed that a major gap of our SaaS product was a lack of focus on the problem space of asset health in fleet.
Part 2: The Chicken-Egg Question
Do you sell products first and then build, or do you build products first and then sell? The chicken-egg question between sales and product and how user research found a happy medium.
The Sales Argument
“Sell the product as we build. We have to have money to continue, and we can always find ways to buy time to build.”
Uptake had gotten its start in predictive maintenance. Back in the day, it had proved itself out and won the deal with Caterpillar by demonstrating its ability to use machine data to generate insights about asset health (side note: asset is the Uptake word for machine). By combining machine data and historical maintenance records, its data scientists were able to predict and prevent future failures that might otherwise cause unexpected downtime.
In the pivot to SaaS, sales had taken precedence over predictive maintenance as a point of focus. With Fleet being its first undertaking, the company put a lot of stock into flexing its new sales muscles, including taking a risk on the SMEs’ plan of entry into the fleet market with the ELD mandate. From the start, this plan was at odds with a maintenance focus. By targeting drivers and their managers, it gave priority to operations and logistics, which oversaw how the fleet was used to accomplish business objectives but lacked the expertise and incentive to pull a truck off the road for non-routine maintenance.
Unfortunately, the ELD app did not end up being the panacea the business team had hoped for. While it had resulted in some sales, it had mostly been among small companies that had procrastinated becoming compliant with the mandate. Wanting to catch bigger fish (or “whales” as one sales lead put it) and struggling under the pressure the company had put on them, the sales team began to relay back all of the missing features that had prevented a sale and all of the additional features they had promised would be in the product in order to make a sale. These features filled the product roadmap, shaping our product into a carbon copy of market competitors and drifting further and further from Uptake’s core strength in predictive maintenance.
The Product Argument
“Build an informed product first. It’ll sell itself if we build the right product right.”
This put the Fleet product manager in a difficult position. The sales team was selling faster than he had development resources to build the product, and they were diverting data science resources to work on tasks with little relevance to the product. As a way to buy time for development, the sales team had started to offer pilot data assessments that would generate excitement among customers about what we could potentially do with more time. These data assessments, with their different focus on fuel optimization, potentially misled customers about what our product would deliver. They also created a barrier between product and data science. With the data scientists’ time tied up, there was little time to collaborate on how data science would be integrated with the product in order to surface the right insights to the right user at the right time, rather than in a vacuum.
In addition, the sales team’s evidence for feature priority was obscure and inconsistent as they kept selling new features that changed the product focus. At this point, user research lacked the depth of knowledge to weigh in on feature priority, so the product manager had only the sales team to rely on as a source of input. He also lacked organizational support from the business line leads, who prioritized speed and sales. Taking time to figure out what the right product was, and to follow through on any pivots, meant sacrificing precious time that could be used to get more sales.
The Happy Medium
Thus, in the time it took user research to find its cadence and conduct our first round of discovery research, active sales were at stake and the product was already half-built.
Consequences emerged from creating user archetypes that bypassed additional primary research. In the course of conducting field visits with advisory council members and business lead connections, our team uncovered a different set of stakeholders whose key needs were neglected by our SaaS solution’s features and functionality. The typical fleet organizational pattern showed that there were three primary stakeholder groups with different goals and motivations, not the four user archetypes we had defined. This meant that while our product was technically usable, it was also a Frankenstein of features that did not yet fully support any single user group, operations and maintenance stakeholders alike.
Despite the temptation to leverage this new information as proof that “I told you so”, we also saw the opportunity in our work to help the Fleet team find new direction. Beyond the correct order between selling and building, this was a crisis of identity for our product, and one that all stakeholders could rally around resolving. Moreover, while we were inclined to side with the product manager’s perspective, we tried to approach the situation through a lens of empathy for the sales team under the pressure of company expectations and demands.
We facilitated a series of meetings with the product manager, sales leads, and business leads to negotiate Fleet team priorities. Our positive relationship with the sales team played a large role in making these meetings happen, as they were inclined to see the value of our perspective. However, our work elicited some mixed reactions. The product manager was relieved there was finally some field data to create an evidence-based feature priority and was ready to fight for what the users needed. The sales leads insisted they could not sell without the features on the roadmap. The business leads were concerned about the implications for sales and alarmed by the idea of having to pivot at this stage in our progress. We contended that the product needed to exist beyond an aggregation of features in order to create real value for the fleet industry, and that this was more of a focus than a pivot.
After significant deliberation, we came to a consensus that differentiated between short-term and long-term visions for the Uptake product in Fleet. There was a certain set of MVP features that needed to be included in order for our product to be competitive among its market competitors, and these would stay on the product roadmap and be prioritized for completion by the end of that quarter. However, Uptake’s strength and, our research revealed, the need in Fleet was in predictive maintenance, an untapped and little-understood market that our company had the potential to own. Starting in the next quarter, the features would be reprioritized according to what the solution needed to have in order to support maintenance users’ work and provide real value to the problem space of asset health in the fleet industry.
With this consensus, all Fleet stakeholders committed to the same vision for the company’s pilot SaaS product.
Part 2 Conclusion
This was not a perfect happily ever after. In fact, it would continue to be an uphill battle as user research figured out exactly what a solution that supported maintenance users’ work looked like (which could be a separate case study in and of itself).
However, this was the first time and not the last time user research would use its work to align stakeholders across the Fleet team around the vision of a product that created true value for users and captured the market. In this ongoing work, we earned ourselves a new role that made us essential to the company’s SaaS play for Fleet and beyond.
User research rarely resembles its ideal, and this was an instance where its reality was messy. I wish I could tie a bow on this case study with a concise conclusion that provides resolution, but there is no real conclusion because the work is not done yet.
If nothing else, this case study is a call to finding new ways of doing research. It was up to our team to trial and error different approaches that would make us relevant to this pivotal moment at the company. By being flexible, creative, and open to pushing the bounds of our practice, we were able to navigate challenges like not having access to users and working with an unpredictable sales team.
In the end, this case study is not meant to be prescriptive, but rather to provide food for thought. Abiding by the true nature of research, I end with questions:
How might we cultivate knowledge without access to users?
How might we contribute to a team that is not ready for research, including misunderstanding and/or deprioritizing what we do?
How might we better convey assumptions vs. validation in our knowledge?
How might we become a source of expertise that others will seek out?
How might we communicate our insights more quickly and more frequently without compromising quality?
How might we increase the actionability of our insights?
How might we better demonstrate the impact of our work on adoption?
How might we change the perception of our team in the company?
How might we position ourselves with other teams?
How might we be creative and flexible without compromising the principles of our practice?
I encourage ideation on the questions on this list and the addition of others. It is this kind of thoughtfulness and curiosity that is formative to our practice in the present and will be imperative to its future.