A failed Netflix homepage redesign experiment that no one even noticed

Last month, while clearing my cookies, I took two screenshots of the Netflix homepage. That is how I accidentally discovered an A/B design test in progress. While Netflix is probably too big a company to publicize its experiments (even though doing so would showcase an impressive process), I knew it was only a matter of time before they made a decision and settled on a design. And so I waited. When February came and all traces of the experiment had disappeared, it became clear that the decision had been made: they rejected version B and kept the existing version (A). Here are some possible explanations for why this might have happened (with links to the tested interface patterns, of course).

Changes

A word of caution: don’t rush to conclude that every attribute of version A is positive. While such experiments are very valuable, they suffer from dilution of causation when several variables are combined. In the end, we cannot tell which of these individual changes were positive or negative, because they were all grouped together. To learn the specific impact of each change, you would need to test it in isolation, which is why we monitor individual patterns. With that caveat in mind, here is why I think Netflix chose version A.
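To make the isolation point concrete, here is a minimal sketch in Python of how one would compare two bundled variants with a two-proportion z-test. The signup counts are made up for illustration (Netflix’s real metrics and sample sizes are unknown). Even a significant result here only tells us that the bundle of changes performed differently as a whole, not which individual change caused it.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B's conversion rate
    differ significantly from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical signup counts, illustrative only
z = two_proportion_z(conv_a=5200, n_a=100_000, conv_b=5050, n_b=100_000)
print(round(z, 2))  # → -1.52
```

With these invented numbers, |z| is below the conventional 1.96 threshold, so the difference would not be significant at the 5% level; and even a significant result would say nothing about which of the six changes below actually mattered.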

  1. Headline: “Watch the Latest Movies” vs. “Watch Anywhere”
    The headline, or value proposition, in version A promises viewers they will “see what’s next” rather than “watch on any device.” With version A remaining, this may be a weak signal that new or upcoming content matters more to viewers than how and where they can watch it. Viewers probably already expect to be able to watch on any device. We are tracking headline patterns right here.
  2. Prominent movie preview thumbnails – similar to pattern #95
    One of the first major differences between the two versions is that version A has more prominent movie thumbnails. They sit at the top of the page and take up more space than in version B, where they appear further down. Using realistic imagery to reinforce the value proposition (“See what’s next”) may be more powerful than abstraction in this case.
  3. A repeated vs. varied call to action
    Version B shows four different options for the main call to action. It seems to me that such a variety of messages can increase uncertainty: might pressing different buttons have different consequences for the user (could users subconsciously read “Join free for a month” as contradicting “Try it free”)? I think that would make a great follow-up experiment.
  4. Ghost buttons
    We have started tracking the effects of outline (ghost) buttons as a separate pattern.
  5. Answering the price question vs. increasing uncertainty
    While version A doesn’t show the price right away, it makes it available with a single click on the “Pick your price” tab. Version B mentions “low fees,” but in practice leaves users in a state of anxious uncertainty. We have begun collecting pricing patterns for further analysis.
  6. Unsubscribe visualization
    I love how version A emphasizes how and where users can easily unsubscribe if they wish. This builds user confidence. I will definitely turn this into a standalone pattern to inspire future experiments.

Share your thoughts

What else do you think could have influenced the choice of version A? Share your thoughts. And of course, if you are interested in learning which patterns tend to succeed, please check out the many evidence-based reports we actively publish.
