Revise: Smarter ratings & reviews

NYKAA FASHION | 2023-24

Company overview

Nykaa Fashion is the fashion vertical of Nykaa, India’s leading beauty and lifestyle platform. It offers curated collections across apparel, accessories, and lifestyle for women, men, and kids. With a growing focus on personalization and post-purchase experience, the Ratings & Reviews revamp aimed to enhance trust, boost conversion, and turn more buyers into brand advocates.

Not just stars, but stories

Why we had to rethink reviews from the ground up

Online reviews are powerful—but underutilized. On Nykaa Fashion, less than 4 out of 10 users complete their reviews. We set out to change that—not by pushing harder, but by designing smarter.

Project overview


Time Period

3 months design, 4 months development


Responsibilities

Project Lead

Design strategy, stakeholder management, user research, interaction design, prototyping, user testing


Team

1 UX Designer, 1 UX Researcher, 1 Visual Designer, 1 PM, 1 Dev for each platform (React, Android, iOS), 1 QA


The brief that started it all

Project brief

This project focused on redesigning the Ratings & Reviews experience on Nykaa’s platform—beginning with how users submit reviews. The aim was to increase review participation, collect richer feedback, and create a more rewarding, user-centered flow.

Scope

  • Phase 1: Review Collection (User-submitted content)

  • Phase 2: Review Display in Product Page (Planned for future phase)

Approach

We followed a full-spectrum design process—starting with secondary and primary research, followed by structured ideation, rapid prototyping, and usability testing. Each phase helped validate and refine our direction before we finalized the experience for handoff.

The existing review flow

The old flow was long, linear, and lacked delight. Users had to answer or skip each of the seven questions one by one, with no way to submit early. The form felt like a chore, offering no incentive, no personalization, and little motivation to finish.

Problem statement

The review experience today feels like a task—too long, not engaging, and unrewarding. We needed to simplify the process, make it more relevant to each user, and turn it into something that felt smooth, meaningful, and worth doing. A well collected review not only helps the reviewer feel heard but also guides future shoppers in making confident purchase decisions.

While the numbers revealed what was going wrong, they couldn’t explain why users weren’t participating. To understand the 'why,' we turned directly to our users.

From interviews to Insights

Real stories, emotional drivers, and insights that shaped our direction.

Research Approach

  • Primary research: 10 in-depth interviews with users across age groups, regions, and fashion preferences—capturing motivations, frustrations, and behavioral patterns across varied purchase journeys. We included first-time buyers, frequent shoppers, and reviewers vs. non-reviewers to understand both active and silent voices.

  • Secondary research: Synthesized 10+ industry reports and academic papers on digital reviews, feedback loops, and behavioral triggers—spanning UX best practices, psychological principles (like reciprocity and social proof), and product strategies adopted by leading platforms.

  • Competitor benchmarking: Studied how 10+ platforms—including Sephora, Myntra, Amazon, Airbnb, Zappos, Nordstrom, ASOS, Flipkart, Google Maps, and TripAdvisor—use structure, incentive, and emotional cues to drive review quality and participation.

Deep dive into Insights

A snapshot of what you’ll find:

  • Key patterns from primary interviews of Nykaa Fashion shoppers—motives, pain points, and expectations

  • Behavioral themes and emotional triggers across user types

  • Highlights from 10+ competitors with comparative insights

  • Secondary research themes from UX, psychology, and feedback systems

  • Methodology, contradictions, and opportunities—all mapped to design decisions


Key takeaways from research

We combined deep user interviews with a layered analysis of industry best practices and competitor approaches.

From secondary research & competitor analysis:

  1. Incentivizing feedback through visibility and rewards increases user participation, as seen on platforms like Sephora and Google Maps.

  2. Trust is enhanced when platforms offer structured filters, review breakdowns, and credible user profiles.

  3. Attribute-based review formats enable consistent, actionable feedback collection across product types.

From primary research:

  1. Users are willing to give feedback when the experience is quick, purposeful, and low-effort.

  2. Visual content like photos and videos makes reviews feel more relatable and trustworthy.

  3. Complex or time-consuming flows lead to significant drop-offs in user participation.

  4. Acknowledging users’ input increases their likelihood to engage and contribute again.

"It’s not for the brand. It’s so someone else doesn’t go through what I did."


— P2, Female, 26, Delhi

Empathy as a driver — users aren’t writing reviews to help the company, but to help other users.

"I only leave a review when I’m either really happy or really angry."


— P6, Female, 31, Bangalore

Emotionally driven reviewers take action only when their experiences are extreme.

"Sometimes reviews feel sugar-coated. I just wanted to tell people what actually happened."


— P4, Female, 29, Mumbai

Authenticity and honesty — users want to balance overly positive reviews with real experiences.

"Most reviews just say 'nice'—that doesn't help me decide. I want real details."


— P4, Female, 29, Mumbai

Users want contextual, relatable feedback about quality, fit, material, sizing, and more.

The many faces of a reviewer

User Cohorts (Behavioral Segments)

Users interact with reviews in different ways—some are vocal, others stay silent, and a few only speak when something extreme happens. Mapping these cohorts helped shape solutions that meet a range of behaviors and motivations.
These cohorts later became design lenses, guiding personalization strategies, feature priorities, and tone across the review flow.

  • The Deep Diver

    Actively searches and reads multiple reviews before making a decision. They rely heavily on others' experiences to validate their choices.

  • The Consistent Contributor

    Regularly leaves reviews, regardless of the product experience—often feels a sense of responsibility or habitually contributes.

  • The Independent Buyer

    Rarely reads reviews. If the product seems convincing through visuals or description, they go ahead and purchase without external opinions.

  • The Complaint Reviewer

    Only leaves a review when the experience is bad. Reviews are often emotional, focused on highlighting flaws or disappointments.

  • The Extremes-Only Reviewer

    Gives feedback only when their experience is either amazing or awful—never in between.

  • The Silent Shopper

    Never leaves a review, regardless of how good or bad the experience was. They're here just to shop and move on.

  • The Incentive Completer

    Finishes reviews primarily for rewards or because the platform asks—minimal effort, often generic input.

Based on everything we heard, we paused to define the core problems clearly before jumping into solutions.

Uncovering the Gaps

A closer look at how users interact with the current review system

63%

abandonment rate

1%

of reviews included images

24-48hrs

Most users submit reviews within 2 days of delivery

Most reviews were just star ratings

Lack of context—like fit or body type—led to low fashion review engagement.

Framing the real problem

Reframing the problem statement based on insights

The review flow is tedious and lacks contextual cues, leading to high drop-off and low-quality feedback.


Contributing Factors:

  • Lack of motivation: No visible reward or recognition for contributing feedback.

  • High friction: The process isn’t intuitive or guided, leading to a 63% abandonment rate.

  • Limited content richness: Only 1% of reviews include images, reducing relatability and trust.

  • Missing context in reviews: Absence of key fashion-specific cues like fit or body type makes feedback less actionable and reduces user incentive to contribute.

  • Effort vs. value imbalance: Users don’t feel their input creates meaningful impact or visibility.

There are two parts to the project.

(Note: this project's scope covers only the review collection part.)

Review Collection

With only 37% completion, we saw the need to simplify reviews and drive action through contextual nudges, smart prompts, and meaningful rewards.

Review Showcase in Product

Reviews are vital for trust, but without clear organization and filters, they lose impact.

With the problem clear, we focused on exploring ideas to address user frustrations and motivations.

From concept to creation

To move from insight to action, I led a 90-minute Crazy 8s ideation workshop with peers & stakeholders.

Initial ideas

How to make sharing feedback easier and more motivating

  1. Feedback form

  2. Voice assistance and translation

  3. Monetary rewards for reviews

  4. Gamification

  5. Leaderboard & badges

  6. Unique review incentives

  7. User feedback prompts & touchpoints

  8. Inform users of the impact of their feedback


To bring them to life, we translated a few into quick wireframes. Early explorations included concepts like attribute tagging, inline image prompts, and contextual nudges.

Balsamiq wireframes for review collection


Early Concepts and Testing

Building on the early wireframes, I began translating ideas into functional interfaces and interactions—this time with real-world constraints and dev feasibility in mind.
Many of these design choices directly tackled the key problem areas we had defined—from friction and drop-off, to lack of emotional connection and clarity.

Where users spoke back

Evaluating flow clarity, emotional engagement, and feature intent.

We developed two final variations based on feedback from earlier iterations—refining the review meter, introducing conditional rating and attribute sections, and designing a more relatable image upload experience.

We ran unmoderated tests of both versions with 10 users on a usability-testing tool.

User testing insights

Users appreciated how the form adapted based on their rating—making it feel more intuitive and relevant.

Users preferred the tabbed layout—it was simpler, scannable, and reduced overload.

Users found the fun facts engaging. It added personality to the review process and kept them interested.

Familiar formats drive ease. Star ratings are universally recognized, making feedback quick and intuitive for users.

The review meter was misunderstood as a character limit.

Users expected a reward or acknowledgment after submission.

Where it all came together

The final solution focused on simplicity, relatability, and participation:

  • Familiar Rating Form: Familiarity plays a key role in shaping the feedback experience. Star ratings are widely used and instantly understood, making them an intuitive way for users to start sharing feedback.

  • Contextual Prompts: Attribute questions changed with the rating, helping users share more relevant feedback.

  • Tabbed Layout: A clean, scannable tab design helped users focus and reduced cognitive load.

  • Fun Facts: Displaying fun facts at the start of the review process sparks curiosity and engages users right away, making the feedback experience more interesting and encouraging participation.

  • Image Upload Engagement: Users interacted with the media upload feature, finding it easy and meaningful to add photos.
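To make the "Contextual Prompts" idea concrete, here is a minimal TypeScript sketch of rating-conditional attribute questions. The question sets, the 4-star threshold, and the function name `promptsForRating` are all hypothetical illustrations, not Nykaa Fashion's actual implementation.

```typescript
// Hypothetical sketch: attribute questions change with the star rating,
// so happy and unhappy reviewers each see prompts relevant to them.

type Prompt = { id: string; label: string };

const POSITIVE_PROMPTS: Prompt[] = [
  { id: "fit", label: "How was the fit?" },
  { id: "quality", label: "How was the fabric quality?" },
  { id: "style", label: "Did it look like the photos?" },
];

const NEGATIVE_PROMPTS: Prompt[] = [
  { id: "issue", label: "What went wrong?" },
  { id: "sizing", label: "Did it run small or large?" },
  { id: "expectation", label: "What did you expect instead?" },
];

// Returns the attribute questions to show for a 1-5 star rating.
// Threshold of 4 stars is an assumption for illustration only.
function promptsForRating(rating: number): Prompt[] {
  if (rating < 1 || rating > 5) {
    throw new Error("rating must be between 1 and 5");
  }
  return rating >= 4 ? POSITIVE_PROMPTS : NEGATIVE_PROMPTS;
}
```

A branch like this keeps the form short for every reviewer while still collecting the fashion-specific context (fit, sizing, quality) that the research showed readers want.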

Each final design decision was tied back to one or more behavioral cohorts identified earlier:


Homepage interaction prototype (ProtoPie link)

Reflection & Impact

What I learned about motivation, emotion, and honest contribution

Looking back, the design wasn’t just about form simplification—it was about understanding people.

  • Small moments like 'review facts', thoughtful microcopy, and relatable visuals made a measurable impact on engagement.

  • Emotional drivers like empathy, honesty, and self-expression were more valuable than incentives alone.

  • Creating space for user voice meant removing friction and highlighting value at every step.

The foundation is set. The next step: Review Showcase—a phase dedicated to making every voice not just heard, but seen.

How did it impact the business?


Early results from launch show encouraging signs of growth in review engagement and quality
(As of 15 April 2025)

"If this redesign gets even 10% more customers to add honest photos and useful tags, the downstream impact on fashion sales and returns could be massive"


— Internal Product Stakeholder, Nykaa Fashion

