Argument Analytics: Which Tagging System?
Leading UX for a new feature: getting the tagging system right through data exploration, rapid prototyping, and testing

In early 2020, I led design for Context's newest feature, Argument Analytics.
Our users are attorneys, who use Argument Analytics to find documents written by a courtroom opponent from a previous, similar case.
How Attorneys Research Opponents Today
Finding documents written by an opponent is not easy today. Our users had to visit three different products to find what they were looking for, and their workflow included workarounds like copying and pasting long lists of case numbers.
From Three Products to Three Screens
I simplified this process considerably.
Instead of visiting three different products, our attorney-users can simply type their opponent's name into the search engine and be brought to an overview page with the opponent's bio and experience.
From there, they can visit Argument Analytics to find a list of documents written by that opponent, then dive deeper by going to the document view.

Argument Analytics Process Details:
Designing an Effective Tagging System
Argument Analytics' results list page surfaces cards with "argument passages": snippets of text written by a user's courtroom opponent.
However, there were two potential ways to tag the argument passage (Topics and Concepts). LexisNexis has many databases, and figuring out which to use is often a challenge.
There was a lot at stake, too: one of our new products was already using Concepts, but users were accustomed to the older tagging system of Topics.
So, we needed to figure out: of Topics and Concepts, which would help users understand an argument passage at a glance?
A Bit About Topics & Concepts
Both are ways of tagging an argument passage.
When is Tagging Important?
Using initial research as a guide, I ran a workshop with the team and created a user journey of the process attorneys use to find and research documents written by an opponent.

The user has a large list of argument passages to go through and has to decide which documents deserve a deeper dive. Tags help them sort through this list faster.
Creating this user journey kept us on track through the development process.
Designing with Data
At this point, I knew that Topics or Concepts could make or break having a great experience for our users.
The next step was to collaborate with the data science team to break down Topics and Concepts into meaningful demographics.
This helped me define the pros and cons of each tagging system, and it allowed me to make sure the data science output was tailored to the users' needs.
It also allowed me to start thinking about the unique design challenges associated with Topics and Concepts, and to begin ideating solutions.
Our data scientist provided the below breakdown of our attorney database corpus:
I asked the data science team to focus on the 26% of attorneys in our database who had authored between 50 and 100 documents, because:
- It was a common case, and
- Of the more common demographics, it presented the most difficult design challenge.
Why was this a difficult design challenge?
That 26% of attorneys might have:
- 50 - 100 documents
- 11 - 12 tags per document
- 5,000 - 6,000 total documents
That's a lot to consider when designing an interface!
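A demographic cut like this reduces to a simple bucketing pass over per-attorney document counts. The sketch below is purely illustrative — the type, bucket boundaries, and sample data are hypothetical, not LexisNexis's actual schema or corpus:

```typescript
// Bucket attorneys by how many documents they have authored, then report
// each bucket's share of the corpus. All names and data are invented for
// illustration; the real breakdown came from the data science team.
type Attorney = { name: string; documentCount: number };

const buckets: Array<[string, (n: number) => boolean]> = [
  ["under 50 docs", (n) => n < 50],
  ["50-100 docs", (n) => n >= 50 && n <= 100],
  ["over 100 docs", (n) => n > 100],
];

function bucketShares(attorneys: Attorney[]): Map<string, number> {
  const counts = new Map<string, number>(buckets.map(([label]) => [label, 0]));
  for (const a of attorneys) {
    for (const [label, matches] of buckets) {
      if (matches(a.documentCount)) {
        counts.set(label, (counts.get(label) ?? 0) + 1);
        break; // each attorney falls in exactly one bucket
      }
    }
  }
  // Convert raw counts to fractions of the whole corpus.
  const total = attorneys.length;
  return new Map([...counts].map(([label, c]) => [label, c / total]));
}
```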
From there, I asked the data scientist to pick an attorney from that 26% and find:
- The total number of argument passages from all the documents the attorney had authored.
- The number of Topics for each argument passage.
- The number of Concepts for each argument passage.
I used this data to design a Topics version and a Concepts version of our solution.
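Those per-attorney numbers amount to a small aggregation over the attorney's argument passages: how many passages there are, and how many tags a card would carry on average under each system. A minimal sketch, with invented field names rather than the real data model:

```typescript
// One argument passage with its two candidate tag sets.
// Field names are illustrative, not the production schema.
type Passage = { topics: string[]; concepts: string[] };

// Summarize how heavy each tagging system would be for one attorney:
// total passages plus the average tag count per passage per system.
function tagLoad(passages: Passage[]) {
  const totalTopics = passages.reduce((sum, p) => sum + p.topics.length, 0);
  const totalConcepts = passages.reduce((sum, p) => sum + p.concepts.length, 0);
  return {
    passageCount: passages.length,
    avgTopicsPerPassage: totalTopics / passages.length,
    avgConceptsPerPassage: totalConcepts / passages.length,
  };
}
```

Numbers like these made it possible to compare the two systems' visual density before drawing a single screen.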
But... from this exploration, we still had big questions. We still weren't quite sure whether Topics or Concepts would make the better tagging system.
Rapid Prototyping & Testing
Since there wasn't a clear answer after the data deep dive and design ideation, we decided to test with users.
I built two prototypes to test and collaborated with the data science team to populate them with realistic data.
The prototypes featured a pared-down design, so we could get users to focus only on the tagging system when giving feedback.
We tested with five attorneys who researched opposing counsels’ documents as part of their regular work.
Four of the five preferred Topics to Concepts, and the fifth said he had no strong preference.
This was a bit surprising: we had speculated that Concepts' ability to tag "facts" would be a big bonus, but users weren't interested in that.
What they really appreciated was seeing the big Topic the case was about (e.g., "copyright law"), and then being able to drill down into the specifics (e.g., "compulsory damages").
Adding to the Design System
Now that we were moving forward with Topics for our tagging system, it fell to me to refine the visual style and create the interaction design.
As seen below, our old way of showing Topics wasted space and was visually clunky.
Old Way of Displaying Topic Tags
I explored ways of hiding the extended Topic hierarchy and indicating that there was additional information ready to be accessed.
A hidden topic hierarchy was a new addition to our design system, so I consulted frequently with other designers and engineers and iterated on their feedback.
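At its core, a collapsed hierarchy like this is a disclosure pattern: show the first and last levels of the topic path, hide the middle until the user expands it. A hypothetical sketch of that behavior — the function name, separator, and truncation rule are illustrative choices, not the shipped implementation:

```typescript
// Render a topic hierarchy path, hiding the middle levels until the
// user expands it. Collapsed: "Copyright Law > ... > Compulsory Damages".
// Expanded: the full path. Separator and ellipsis are illustrative.
function renderTopicPath(path: string[], expanded: boolean): string {
  if (expanded || path.length <= 2) return path.join(" > ");
  return `${path[0]} > ... > ${path[path.length - 1]}`;
}
```

Keeping the top-level Topic and the most specific tag visible matches what users told us they scan for first, while the ellipsis signals that more levels are available.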
I ultimately came up with two different designs for hiding the extended hierarchy, and I quickly prototyped and tested them with five users.
Ultimately, users overwhelmingly favored the design below:
New Way of Displaying Topic Tags

"I like how everything is hidden until I need it."
"The arrow tells me there is more information there."
Impact & Reflections
I collaborated on all phases of research, using my legal knowledge to help craft test plans and distill insights.
I ran a workshop and defined the user journey. This helped frame the problem and kept the team on track throughout the development process.
I also collaborated heavily with the data science team to direct data modeling efforts between Topics and Concepts. I took the findings from these activities and incorporated them into the design. Because I insisted on using realistic data in my prototypes, the team could be confident about moving forward with Topics.
I contributed to our shared design system by collaborating on the new Topics design with other designers and engineers, and tested to make sure it was intuitive and helpful to users.