UI/UX, Product design

Building an AI Conversation Design Tool

We created a tool that allows recruiter admins to give conversational guidelines to an artificial intelligence that talks to job candidates

12 minutes


My role

For 18 months and 3 project iterations, I handled all the UI and visual aspects and heavily contributed to the UX development of this project. I conducted user research internally and with Mya's customers, consulted our solutions architects, and worked closely with the PM, engineers, and our VP of Engineering to define the user experience while simultaneously presenting ideas internally and externally using wireframes and interactive high-fidelity prototypes.

Client

Mya Systems

Tools

Sketch, InVision, Balsamiq

Collaborators

Ben Rohrs

Project Manager

Helge Scheil

VP of Engineering

Jerilyn Sambrooke

Project Manager

Mihail Gumennii

Frontend Engineer

Shalom Aptekar

Frontend Engineer

Lisa Schiller

Solutions Architect

Laura Maddox

Backend Engineer

Put simply, Mya is a chatbot. You know that annoying help chat window that sometimes pops up in the bottom right of websites? That’s not us. Similar, but Mya is better. You’ll see why.

Recruiters can use Mya to help with both inbound and outbound recruiting. Mya can talk to candidates via text or chat widget to gather job-related information, answer questions, keep potential candidate profiles up to date, and do a variety of other things. Behind the scenes, things are quite complex and Mya Systems has spent years considering the ethical and technical aspects of creating a robust AI platform.

Conversations can have many different outcomes, good or bad, so our highest priority was ensuring Mya had quality conversations with each individual it engaged with.

The AI is well crafted by people much smarter than me and powered by tons of data models. It is constantly learning and adapting, but for Mya to have a purposeful conversation with someone (e.g. prescreening a candidate for a job), it needs a Conversation Blueprint. A Blueprint typically contains a set of questions, defines conversation boundaries, and gives the AI the necessary information to take a job candidate through the first steps of the recruitment process. Mya then uses that Blueprint to simultaneously talk to any number of candidates about that job. Initially, the Mya team would spend several days going through an extensive process to develop each unique Conversation Blueprint (hundreds of them), but as the company scaled and more customers demanded custom Blueprints, that was no longer sustainable. The entire product relied on quality Blueprints, so we needed to address this immediately. We posed the question:

What if we let customers create their own Conversation Blueprints with our derived best practices baked into the creation process?

Before I break this down, here is a sneak peek of the final solution 😉

To kickstart the project, the first thing I wanted to do was delve into how we were currently creating Conversation Blueprints. I quickly learned that I knew nothing about anything. Thankfully our conversation design team was patient and explained their process to me.

A Blueprint is made up of multiple Nodes. Nodes are essentially written questions pre-configured with response handling (instructions on how to handle candidates’ various responses). The precise wording is determined by our linguists. Every Node across all of our customers lives within the Node Library.

Here’s a simple example: You are a recruiter and want to hire a truck driver with Mya’s help. Mya is given a Blueprint that contains a single Node with the question, “Do you have a valid driver’s license?” You, the recruiter, determine how the candidate’s answer will be weighted towards their job eligibility: whether it’s a hard requirement or just a preference. If a candidate doesn’t meet a hard requirement, they will be disqualified from the job and notified why. Let’s say a candidate responds with “No, but I can get one.” Mya will not disqualify the candidate but will flag the response for you to look at later, and then Mya moves on with the conversation.
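
To make that concrete, here is a minimal sketch of how a Node and its response handling could be modeled. Everything below (the type names, the regex matching, the evaluateResponse helper) is my own illustrative stand-in for Mya’s actual schema and NLU, not the real thing:

```typescript
// Hypothetical model of a Blueprint, its Nodes, and response handling.
// All names and the regex-based matching are illustrative stand-ins,
// not Mya's actual schema or natural-language understanding.

type Outcome = "pass" | "flag" | "disqualify";

interface ResponseRule {
  matches: (answer: string) => boolean; // toy stand-in for real intent detection
  outcome: Outcome;
}

interface ConversationNode {
  id: string;
  question: string;          // exact wording, owned by the linguists
  hardRequirement: boolean;  // hard requirement vs. mere preference
  rules: ResponseRule[];
}

interface Blueprint {
  id: string;
  nodes: ConversationNode[]; // kept short: ~7 Nodes is a safe maximum
}

const driversLicense: ConversationNode = {
  id: "drivers-license",
  question: "Do you have a valid driver's license?",
  hardRequirement: true,
  rules: [
    { matches: (a) => /but|can get/i.test(a), outcome: "flag" }, // "No, but I can get one."
    { matches: (a) => /^\s*no\b/i.test(a), outcome: "disqualify" },
    { matches: (a) => /^\s*yes\b/i.test(a), outcome: "pass" },
  ],
};

function evaluateResponse(node: ConversationNode, answer: string): Outcome {
  for (const rule of node.rules) {
    if (!rule.matches(answer)) continue;
    // Only a hard requirement can disqualify; a missed preference is flagged.
    if (rule.outcome === "disqualify" && !node.hardRequirement) return "flag";
    return rule.outcome;
  }
  return "flag"; // anything unrecognized is surfaced to the recruiter
}

console.log(evaluateResponse(driversLicense, "No, but I can get one.")); // "flag"
```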

Here are a few more things I discovered:

  • Job-to-Blueprint Mapping: For each job, there was a 1-to-1 mapping with a Blueprint. So for every new job opening or outbound initiative (Campaign), a new Blueprint had to be made. Blueprints could be duplicated, but they were not reusable.
  • One Node to rule them all? Not quite. Nodes were not dynamic and the Node Library was crowded with multiple variations of the same questions. You would find 3 separate Nodes like “Do you have at least __ year(s) of experience in this field?”, “How much experience do you have in this field?”, and “We’re looking for people with at least __ year(s) of experience in this field, do you meet this requirement?” A new Node had to be created for every new question, even similar questions that needed to be worded differently.
  • Conversations can break: The order of Nodes, or simply having too many Nodes in a conversation (seven is usually a safe maximum), would increase the probability of candidate drop-off or of the conversation “breaking.” Imagine playing 20 questions with a chatbot, except every question is determining how unqualified you are for a job.
  • Node compatibility: The role type of the job (full-time, part-time, salaried, hourly, etc.) dictates which Nodes are compatible with the Blueprint. You wouldn’t ask someone interviewing for a full-time salaried role what their desired hourly pay is. That would be weird.

All of these discoveries challenged my initial assumptions and would inform my UX decisions.

After discussions with our customers and probably too many internal meetings, we set our course of action: build the Conversation Design Tool (CDT) within our existing recruiter portal. My first self-assigned action item was to think. More specifically, to think about creating Conversation Blueprints from the perspective of recruiter admins, our expected users.

Some admins were very involved in the conversation design process (the power users) but we expected many to be new to it and AI in general (the noobs). We could assume all users would have adequate knowledge of all Mya use cases due to our customer onboarding process, but we did not expect anyone to have the same training and knowledge as our expert conversation designers. The next question was: how can we transform this complex process into an experience that doesn’t give anyone anxiety?

Knowing all this, we wanted to build something that adopted our tried and true techniques but catered to the recruiter admin’s experience.

Without detailing every step of the first two versions, let me give an overview of what we tried to achieve.

For both iterations, we came up with a multi-step Blueprint creation process. Users could, for the most part, select their desired Nodes (questions) from an expansive list, configure them with the relevant information via a generated form, make comments/requests to the Mya team, and live test the conversation with Mya. The result was a self-made Conversation Blueprint. The full sequences of CDT 1.0 (left) and the improved CDT 1.5 (right) are shown below.

CDT 1.0 never made it into the hands of customers, and while CDT 1.5 was tested with several customers, it only launched with our biggest one. When we launched CDT 1.5 with that first customer, we watched intently and evaluated Mya’s performance with their self-made Blueprints for 3 months. The Blueprints performed up to standard. It seemed like a success, but we were aware of two things. First, the Mya team was still often providing help in the process (mostly custom requests and questions related to the tool). Second, the user group from this customer was made up entirely of power users. All of them were already comfortable with conversation design in general and had had plenty of time looking at Nodes prior to CDT, so they were quickly able to use the tool effectively. The newer test users had a harder time and ran into several challenges:

  • They were overwhelmed by how many Nodes (questions) they could choose from. They had never had to select pre-written questions for an interview and were accustomed to formulating and writing their own questions. Many just didn’t know where to start when it came to selecting Nodes.
  • It was hard for them to conceptualize how the conversation would flow from start to finish. There were frustrations around not knowing the exact wording of questions when forming the conversation structure.
  • There were still limitations relative to the original hands-on conversation design process. This was noted by all our users.

A major goal of the Conversation Design Tool was to reduce the amount of work on both ends and allow customers to roll out Blueprints faster. CDT 1.5 partially met that goal. It cut down the amount of involvement needed from the Mya team, but there was still a lot of back-and-forth communication and non-automated processes that cost time. Additionally, the CDT was expected to make Mya, as a complete product, more accessible to a wider variety of customers. What we currently had really only catered to the customer with the longest history with Mya.

CDT 1.5 didn’t quite make the cut. It was somewhat usable, but still not what we had hoped for.

Our findings from the first two iterations gave us a solid basis for the third and final iteration. The leadership team wanted to invest more engineers and resources into the project. As a result, we no longer had to cut corners and could prioritize a user-friendly experience and features that would, in the long run, optimize time and scalability.

Many months and discussions later, we launched into a third effort. At this point, I had established a consistent look and style for CDT, and most design components had already been developed. This allowed me to do the majority of prototyping with mid-fidelity wireframes in Balsamiq. There were several improvements that we prioritized:

1) Custom Questions and Branching: Allowing users to make custom questions, even if just simple yes/no or multiple-choice, was estimated to reduce the need for one-off Node requests by about 75%. We even allowed for simple branching (follow-up questions).
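
As a rough illustration of the shape this took, a custom question with a single follow-up branch might look something like the sketch below; the type names are hypothetical, not the tool’s real data model:

```typescript
// Hypothetical shape for a custom question with simple branching.
// These names are illustrative only.

interface CustomQuestion {
  prompt: string;
  kind: "yes-no" | "multiple-choice";
  choices?: string[];     // only for multiple-choice
  followUps?: FollowUp[]; // simple branching: follow-ups keyed by answer
}

interface FollowUp {
  onAnswer: string;        // the answer that triggers this branch
  question: CustomQuestion;
}

const forklift: CustomQuestion = {
  prompt: "Have you operated a forklift before?",
  kind: "yes-no",
  followUps: [
    {
      onAnswer: "yes",
      question: { prompt: "Is your certification current?", kind: "yes-no" },
    },
  ],
};
```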

2) Reduce Blueprint Volume: Since Blueprints had a 1-to-1 mapping with job listings and outbound initiatives, tons of Blueprints were needed. We found a way to allow Blueprints to be attached to multiple jobs or reused across multiple Campaigns. This meant redesigning dozens of Nodes so they were capable of pulling in dynamic information. Here's a simple example: take a Node like "Are you at least ___ years old?" The age could be derived from whatever job was attached to the Blueprint, saving a user from having to make several versions of the same Blueprint with different ages filled in. While designing a conversation, users could determine which fields would be dynamic (pulled from a job) and which fields were non-configurable.
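
A minimal sketch of the dynamic-field idea, assuming a made-up {placeholder} syntax and hypothetical names: the Node's wording is a template, and the placeholder values are resolved from whichever job the Blueprint is attached to.

```typescript
// Sketch of dynamic-field resolution. The {placeholder} syntax and all
// names here are assumptions for illustration, not the actual implementation.

interface Job {
  id: string;
  title: string;
  fields: Record<string, string>; // e.g. { minAge: "21" }
}

const minAgeTemplate = "Are you at least {minAge} years old?";

function resolveWording(template: string, job: Job): string {
  return template.replace(/\{(\w+)\}/g, (_, key: string) => {
    const value = job.fields[key];
    if (value === undefined) {
      throw new Error(`Job ${job.id} is missing a value for "${key}"`);
    }
    return value;
  });
}

const trucking: Job = { id: "job-42", title: "Truck Driver", fields: { minAge: "21" } };
console.log(resolveWording(minAgeTemplate, trucking)); // "Are you at least 21 years old?"
```

Reusing one Blueprint across many jobs then amounts to resolving the same templates against each attached job's fields, which is exactly why assignment management (next) became necessary.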

3) Blueprint Assignment Management: With the introduction of dynamic information, users needed a way to configure that information when attaching multiple jobs to a single Blueprint.

4) Blueprint Templates: Blueprints could be created as templates and used as a starting point for future conversation designs. In CDT 1.0 and 1.5, we gave users pre-made templates but never allowed them to create their own.

5) FAQs: A Blueprint typically contains a set of answers to FAQs that candidates may ask. There are global FAQs that pertain to the company as a whole, and job-specific FAQs that need to be configured on a per-Blueprint basis. That job had previously fallen to the Mya team, but the process was simple enough to hand over: users should be able to get by with a few reusable sets of FAQs and the occasional light configuration.

6) Usability: To make the process of selecting and configuring questions less intimidating and more cohesive, I completely reimagined the screen where users design the conversation. Previously I had split the process into three steps: select all the questions, go to the next screen, input the relevant information, go to the next screen, and then finally review the actual wording. This new experience allowed users to do all three things, plus much more, on the same screen.

I had received user feedback that navigating and selecting questions from the Question Library (a.k.a. the list of available Nodes) was overwhelming. Rather than surfacing every available Node, I had users type keywords into a smart search. The most relevant questions would surface, and users could view the exact wording and select from there. If they couldn't find a suitable question, they then had the option to create and configure a custom question. The smart search worked well since users already had a general idea of what they wanted to ask; they just needed to type in a few words. For power users, who knew the Question Library well, I included a "View all questions" button that would take them to a screen where they could mass-select their desired questions.
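
The real search ranked results more intelligently than I can reproduce here, but a toy keyword-scoring version conveys the idea; every name in this sketch is hypothetical:

```typescript
// Toy keyword-scoring stand-in for the smart search: rank library questions
// by how many query terms appear in their wording or tags.

interface LibraryQuestion {
  id: string;
  wording: string;
  tags: string[];
}

function searchLibrary(query: string, library: LibraryQuestion[], limit = 5): LibraryQuestion[] {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return library
    .map((q) => ({
      q,
      // score = number of query terms found in the question's text or tags
      score: terms.filter((t) =>
        (q.wording + " " + q.tags.join(" ")).toLowerCase().includes(t)
      ).length,
    }))
    .filter((e) => e.score > 0)     // hide everything irrelevant
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)                // surface only the top matches
    .map((e) => e.q);
}

const library: LibraryQuestion[] = [
  { id: "exp-years", wording: "Do you have at least {years} year(s) of experience?", tags: ["experience"] },
  { id: "license", wording: "Do you have a valid driver's license?", tags: ["driving"] },
];

console.log(searchLibrary("years of experience", library).map((q) => q.id)); // ["exp-years"]
```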

So, hopefully, this video makes a little more sense now  🤓

Although the CDT took a while (18 months) to get to a good place, we were quite pleased with where we landed. When Mya was acquired, the feature had been rolled out to our top 3 customers (accounting for 90% of revenue) and several mid-size customers. It had been receiving positive user feedback and seeing frequent use. We were able to almost entirely off-board the conversation design team from the support teams for those customers. The success of the tool inspired us to continue developing more self-service features and software integrations, and it ultimately pushed the company towards being a stand-alone software product rather than a service.

This was my very first start-to-finish practical UI/UX project, so I certainly learned a lot.

1) Consider ALL the users: It was easy for me to look at our power users and design a tool that would help them quickly get the job done. It took me some time to step into the perspective of someone who was used to recruiting in an entirely different way and had little to no familiarity with the Mya product or AI in general. Creating a less abstract experience let users feel like they were doing what they were good at, just in a different way.

2) Do the extra work: Often when time felt like it was against me, I would cut corners and skip steps in the design process just so I could present SOMETHING visual in meetings. I now realize that presenting information and findings is equally important, even if it's just words on a slide. I wish I had taken extra time to do more research, consulted more resources, and been more intentional about thinking through designs before moving pixels around.

3) Push: Ask for more resources, ask for more support, push the project scope, and present those "moonshot" ideas. I understand getting an MVP out is difficult, and sometimes you just want to get something in front of your users. But I wish I had pushed my initial ideas more. Even if they felt impossible, they were not impractical. And I found that the majority of my initially unvoiced ideas eventually ended up getting implemented. We just had to take the long way there.
