Revise: Smarter ratings & reviews
NYKAA FASHION | 2023-24
Company overview
Nykaa Fashion is the fashion vertical of Nykaa, India’s leading beauty and lifestyle platform. It offers curated collections across apparel, accessories, and lifestyle for women, men, and kids. With a growing focus on personalization and post-purchase experience, the Ratings & Reviews revamp aimed to enhance trust, boost conversion, and turn more buyers into brand advocates.
Not just stars, but stories
Why we had to rethink reviews from the ground up
Online reviews are powerful—but underutilized. On Nykaa Fashion, less than 4 out of 10 users complete their reviews. We set out to change that—not by pushing harder, but by designing smarter.

The brief that started it all
Project brief
This project focused on redesigning the Ratings & Reviews experience on Nykaa’s platform—beginning with how users submit reviews. The aim was to increase review participation, collect richer feedback, and create a more rewarding, user-centered flow.
Scope
Phase 1: Review Collection (User-submitted content)
Phase 2: Review Display on Product Page (Planned for a future phase)
Approach
We followed a full-spectrum design process—starting with secondary and primary research, followed by structured ideation, rapid prototyping, and usability testing. Each phase helped validate and refine our direction before we finalized the experience for handoff.
The existing review flow
The old flow was long, linear, and lacked delight. Users had to step through all seven questions one by one, with no way to submit early. The form felt like a chore, offering no incentive, no personalization, and little motivation to finish.


Problem statement
The review experience today feels like a task—too long, not engaging, and unrewarding. We needed to simplify the process, make it more relevant to each user, and turn it into something that felt smooth, meaningful, and worth doing. A well-crafted review not only helps the reviewer feel heard but also guides future shoppers in making confident purchase decisions.
While the numbers revealed what was going wrong, they couldn’t explain why users weren’t participating. To understand the 'why,' we turned directly to our users.
From interviews to insights
Real stories, emotional drivers, and insights that shaped our direction.
Research Approach
Primary research: 10 in-depth interviews with users across age groups, regions, and fashion preferences—capturing motivations, frustrations, and behavioral patterns across varied purchase journeys. We included first-time buyers, frequent shoppers, and reviewers vs. non-reviewers to understand both active and silent voices.
Secondary research: Synthesized 10+ industry reports and academic papers on digital reviews, feedback loops, and behavioral triggers—spanning UX best practices, psychological principles (like reciprocity and social proof), and product strategies adopted by leading platforms.
Competitor benchmarking: Studied how 10+ platforms—including Sephora, Myntra, Amazon, Airbnb, Zappos, Nordstrom, ASOS, Flipkart, Google Maps, and TripAdvisor—use structure, incentive, and emotional cues to drive review quality and participation.
Key takeaways from research
We combined deep user interviews with a layered analysis of industry best practices and competitor approaches.

From secondary research & competitor analysis:
Incentivizing feedback through visibility and rewards increases user participation, as seen on platforms like Sephora and Google Maps.
Trust is enhanced when platforms offer structured filters, review breakdowns, and credible user profiles.
Attribute-based review formats enable consistent, actionable feedback collection across product types.
From primary research:
Users are willing to give feedback when the experience is quick, purposeful, and low-effort.
Visual content like photos and videos makes reviews feel more relatable and trustworthy.
Complex or time-consuming flows lead to significant drop-offs in user participation.
Acknowledging users’ input increases their likelihood to engage and contribute again.

— P2, Female, 26, Delhi
Empathy as a driver — users aren’t writing reviews to help the company, but to help other users.
— P6, Female, 31, Bangalore
Emotionally-driven reviewers who only take action when their experiences are extreme
— P4, Female, 29, Mumbai
Authenticity and honesty — users want to balance overly positive reviews with real experiences.
— P4, Female, 29, Mumbai
Users want contextual, relatable feedback about quality, fit, material, sizing, and similar attributes.
The many faces of a reviewer
User Cohorts (Behavioral Segments)
Users interact with reviews in different ways—some are vocal, others stay silent, and a few only speak when something extreme happens. Mapping these cohorts helped shape solutions that meet a range of behaviors and motivations.
These cohorts became design lenses—shaping personalization strategies, feature priorities, and tone adjustments across the review flow.
Based on everything we heard, we paused to define the core problems clearly before jumping into solutions.
Uncovering the Gaps
A closer look at how users interact with the current review system
Framing the real problem
Reframing the problem statement based on insights
The review flow is tedious and lacks contextual cues, leading to high drop-off and low-quality feedback.
Contributing Factors:
Lack of motivation: No visible reward or recognition for contributing feedback.
High friction: The process isn’t intuitive or guided, leading to a 63% abandonment rate.
Limited content richness: Only 1% of reviews include images, reducing relatability and trust.
Missing context in reviews: Absence of key fashion-specific cues like fit or body type makes feedback less actionable and reduces user incentive to contribute.
Effort vs. value imbalance: Users don’t feel their input creates meaningful impact or visibility.
There are two parts to the project.
(Note: the scope of this project covers only the review collection part.)

Review Collection
With only 37% completion, we saw the need to simplify reviews and drive action through contextual nudges, smart prompts, and meaningful rewards.

Review Showcase on Product Page
Reviews are vital for trust, but without clear organization and filters, they lose impact.
With the problem clear, we focused on exploring ideas to address user frustrations and motivations.
From concept to creation
To move from insight to action, I led a 90-minute Crazy 8s ideation workshop with peers & stakeholders.
Initial ideas
How to make sharing feedback easier and more motivating
Feedback form
Voice assistance and translation
Monetary rewards for reviews
Gamification
Leaderboard & badges
Unique review incentives
User feedback prompts & touchpoints
Informing users of the impact of their feedback
To bring them to life, we translated a few into quick wireframes. Early explorations included concepts like attribute tagging, inline image prompts, and contextual nudges.
Balsamiq wireframes for review collection

Early Concepts and Testing
Building on the early wireframes, I began translating ideas into functional interfaces and interactions—this time with real-world constraints and dev feasibility in mind.
Many of these design choices directly tackled the key problem areas we had defined—from friction and drop-off, to lack of emotional connection and clarity.
Where users spoke back
Evaluating flow clarity, emotional engagement, and feature intent.
We developed two final variations based on feedback from earlier iterations—refining the review meter, introducing conditional rating and attribute sections, and designing a more relatable image upload experience.
User testing insights

Users appreciated how the form adapted based on their rating—making it feel more intuitive and relevant.

Users preferred the tabbed layout—it was simpler, scannable, and reduced overload.

Users found the fun facts engaging. It added personality to the review process and kept them interested.

Familiar formats drive ease. Star ratings are universally recognized, making feedback quick and intuitive for users.

The review meter was misunderstood as a character limit.

Users expected a reward or acknowledgment after submission.
Where it all came together
The final solution focused on simplicity, relatability, and participation:
Familiar Rating Form: Familiarity plays a key role in shaping the feedback experience. Star ratings are widely used and instantly understood, making them an intuitive way for users to start sharing feedback.
Contextual Prompts: Attribute questions changed with the rating, helping users share more relevant feedback.
Tabbed Layout: A clean, scannable tab design helped users focus and reduced cognitive load.
Fun Facts: Displaying fun facts at the start of the review process sparks curiosity and engages users right away, making the feedback experience more interesting and encouraging participation.
Image Upload Engagement: Users interacted with the media upload feature, finding it easy and meaningful to add photos.
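To illustrate the contextual-prompt idea, here is a minimal sketch of rating-conditional attribute questions. The prompt wording and the 3-star threshold are illustrative assumptions, not the shipped logic:

```python
# Hypothetical sketch: follow-up attribute questions adapt to the star rating.
# Prompt text and the 3-star cutoff are illustrative, not the production values.

ATTRIBUTE_PROMPTS = {
    "low": [
        "What went wrong with the fit?",
        "Any issues with quality or material?",
    ],
    "high": [
        "What did you love about the fit?",
        "How was the quality for the price?",
    ],
}

def prompts_for_rating(stars: int) -> list[str]:
    """Return attribute questions tailored to the given 1-5 star rating."""
    if not 1 <= stars <= 5:
        raise ValueError("rating must be between 1 and 5 stars")
    # Low ratings probe for problems; high ratings probe for highlights.
    return ATTRIBUTE_PROMPTS["low"] if stars <= 3 else ATTRIBUTE_PROMPTS["high"]
```

The same branching pattern extends naturally to category-specific attributes (e.g. fit questions for apparel, finish questions for accessories).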
Homepage interaction (ProtoPie prototype link)
Reflection & Impact
What I learned about motivation, emotion, and honest contribution
Looking back, the design wasn’t just about form simplification—it was about understanding people.
Small touches such as 'review facts', microcopy, and relatable visuals made a measurable impact on engagement.
Emotional drivers like empathy, honesty, and self-expression were more valuable than incentives alone.
Creating space for user voice meant removing friction and highlighting value at every step.
The foundation is set. The next step: Review Showcase—a phase dedicated to making every voice not just heard, but seen.
Early results from launch show encouraging signs of growth in review engagement and quality.
(As of 15 April 2025)