
The Business of Social Games and Casino

How to succeed in the mobile game space by Lloyd Melnick

Month: September 2020

The difference between great executives and everyone else


I came across a great blog post on The Difference Between Amateurs and Professionals, and I found it a perfect proxy for what separates great executives from everyone else.

It is impossible to overstate the value of a great executive, as each of their decisions is amplified over the entire company. I have seen leaders destroy companies, and in so doing cost many people their jobs and health, and have seen others grow companies to unexpected heights, not only creating billions in value but also jobs and other opportunities. Below are the key differences between amateur and professional athletes that also can be applied in the business world.

Amateurs focus on being right. Professionals focus on getting the best outcome.

The best business leaders are focused on delivering great outcomes (successful new products, profits that exceed expectations, gaining market share, etc.) and do not care if they hit these outcomes because of a “brilliant” strategy they came up with or a process that was built without their input. They are focused on results, not sound bites that sound impressive.

Amateurs think good outcomes are the result of their brilliance. Professionals understand when good outcomes are the result of luck.

Related to the above point, successful executives understand that luck plays a role in success (and failure). The universe is uncertain, so any action has a range of possible outcomes; think of it as a business analogue of Heisenberg’s Uncertainty Principle. Given those multiple possible outcomes, luck plays a (big) role in whether the outcome is good or bad no matter the choice, though over time you will shape your set of outcomes if you regularly pursue the options most likely to yield a positive result. Conversely, sometimes you will make bad decisions and get a great outcome; a top executive will understand they got lucky rather than concluding they are brilliant.

Amateurs show up to practice to have fun. Professionals realize that what happens in practice happens in games.

This is one of the lessons I have learned from sports that is critical to business success; it is also called practicing how you want to play. People often take shortcuts when they feel the work will not show in the final product: when preparing marketing collateral, when rehearsing conversations with VIPs, or when preparing for a meeting, presentation or sales call. By not focusing at that stage, they deliver a final product that is not polished. By practicing how you want to play, you and your team are always prepared to deliver an optimal performance.


Amateurs focus on tearing other people down. Professionals focus on making everyone better.

Only insecure business people feel they have to belittle their colleagues and employees. They tear people down in an effort to elevate themselves; they think they will look better in comparison if their colleagues look bad. Conversely, great executives are focused on making everyone around them better and realize they cannot perform well if their colleagues do not.

Amateurs blame others. Professionals accept responsibility.

The best executives follow Truman’s doctrine that “The Buck Stops Here.” Even if an executive was not directly responsible for a failure, the successful ones take on that responsibility and determine ways to avoid it in the future. Allocating blame does not get your business closer to its goal; accepting responsibility and deciding on next steps does. Top executives also never blame their team; they own any mistake their team makes, because they failed to put in place the processes, training or direction that would have avoided the mistake.


Amateurs give up at the first sign of trouble and assume they’re failures. Professionals see failure as part of the path to growth and mastery.

Rather than avoid failure, successful executives pursue it. Not that they want any initiative to fail, but embracing failure ensures they are pushing the limits to grow and improve their business. If you never fail, you are never trying anything new.

Amateurs focus on identifying their weaknesses and improving them. Professionals focus on their strengths and on finding people who are strong where they are weak.

This is a lesson I have stressed with both my children and my teams. If you focus on eliminating your weaknesses and trying to do everything at a passable level, you are not doing anything special and you are not bringing anything unique that will elevate your company to greatness. Mike Trout does not spend two hours a day working on pitching; he takes hours and hours of batting practice even though he is already the best hitter in baseball. Instead of patching weaknesses, look at where you excel and double down on it; use your super-powers to elevate your company above the competition. Then bring in others to compensate for your weak points. It is a show of strength to cover your weaknesses by adding complementary pieces to your team.


Amateurs think the probability of them having the best idea is high. Professionals know the probability of that is low.

The most effective executives do not try to be the smartest person in the room. While amateurs may feel that showcasing their intelligence is important for success, the opposite is true. Top executives deliberately avoid being the smartest person in the room and instead harness the power of tens or hundreds of colleagues, getting better input, more creative ideas and improved team performance.

Amateurs think in absolutes. Professionals think in probabilities.

As discussed above, the universe is uncertain and there is a range of likely outcomes. Rather than trying to get every decision right, what great executives do is maximize their expected value (the sum of all possible values each multiplied by the probability of its occurrence) by playing the odds. While great leaders will have some wins and losses, over time by focusing on the best expected outcome (not one they are sure will happen) their company will outperform competitors.
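The expected-value calculation described above is simple to sketch in code; the product-launch scenarios and probabilities below are hypothetical, purely to illustrate the arithmetic:

```python
# Expected value: the sum of each possible outcome's value weighted by
# its probability. Scenarios and numbers here are hypothetical examples.
launch_scenarios = [
    (0.2, 5_000_000),   # big hit: 20% chance of $5M profit
    (0.5, 1_000_000),   # modest success: 50% chance of $1M profit
    (0.3, -2_000_000),  # failure: 30% chance of losing $2M
]

expected_value = sum(p * v for p, v in launch_scenarios)
print(f"Expected value: ${expected_value:,.0f}")  # $900,000
```

Even though the single most likely outcome is a modest $1M, the decision should be judged on the full $900,000 expected value across all outcomes, which is how a leader "playing the odds" compares competing initiatives.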

Amateurs stop when they achieve something. Professionals understand that the initial achievement is just the beginning.

I am not a big fan of celebrations because you should always strive for more. Business is not finite, you do not win and go home. Instead you want to continue growing. Even in sports, I remember reading how the Patriots coach watched film the night he won the Super Bowl to prepare for the next season.

Amateurs have a goal. Professionals have a process.

The New England Patriots are a perfect example of this principle. What it comes down to is building a good process to ensure continued growth and success, whether the goal is hitting revenue targets or winning football games. If you have a system that can weather external shocks, you will achieve this goal rather than spending your time in a bar rationalizing why you failed and your competitors succeeded.

Amateurs focus on the short term. Professionals focus on the long term.

Being a great business executive is not about having a blow out quarter or a good product launch, it is about showing sustained success. Like in chess, do not focus on the next move or the next five moves but the overall strategy that will help you consistently outperform (or avoid) the competition.

Putting it all together

A great business leader does not only follow these practices themselves, but pushes everyone around them to embrace these concepts as well. Bill Belichick is not one of the greatest of all time simply because he followed these concepts but because he pushed his coaches and players to live by these beliefs.

Key takeaways

  • There are many similarities between what separates a professional athlete from an amateur and what separates a great business executive from everyone else.
  • The best executives focus on getting the best outcome, not being right.
  • The best executives accept responsibility and do not blame others.

Author: Lloyd Melnick | Posted on September 30, 2020 | Categories: General Social Games Business, General Tech Business, Lloyd’s favorite posts | Tags: leadership | 1 Comment

How to manage your customer service to have the biggest impact on your profits


I have spent the last few months studying how to provide an exceptional customer experience, largely as a way to help grow gaming businesses. This learning has helped me identify best practices in creating a WOW experience but my perception of the opportunity changed after reading The Effortless Experience by Matthew Dixon. Unlike many of the other customer service (CS) books that focus on how to provide exceptional experience, Dixon’s book is data driven and challenges some basic assumptions.

In particular, his work shows that exceptional service does not drive the KPIs that matter (revenue and loyalty). Instead, these engagement KPIs are actually most impacted by how much effort a customer has to expend to achieve their goals while high effort translates to disloyalty. As the ultimate goal is to increase lifetime value (LTV) of your customers, and retention is the biggest driver of LTV, limiting disloyalty will have a much bigger impact on your profitability than creating WOW experiences. Dixon writes, “[c]ustomer service drives disloyalty, not loyalty. The average service interaction is four times more likely to make a customer disloyal than to make them loyal…. While most companies have for decades been pouring time, energy, and resources into the singular pursuit of creating and replicating the delightful experience for their customers, they’ve ironically missed the very thing customers are actually looking for—a closer-in, more attainable, replicable, and affordable goal that’s been sitting right in front of them all this time: the effortless experience.”

The beauty of The Effortless Experience is that it is data driven, not anecdote driven. Dixon analyzed more than 97,000 customers and found that there is virtually no difference between the loyalty of customers whose expectations are exceeded and those whose expectations are simply met. He defined loyalty by measuring three behaviors:

  • Repurchase (customers continue to buy from your company)
  • Share of wallet (customers buy more from you over time)
  • Advocacy (customers say good things about your company to family, friends, coworkers, even to strangers).

When Dixon looked at data from the 97,000 customers, “rather than a ‘hockey stick effect’ — where loyalty skyrockets upward — loyalty actually plateaus once customer expectations are met.”

Why customer delight does not work

Since the data shows that great service is not going to create better loyalty, and thus improve your customer lifetime value (LTV), the next step of analysis is how customer service can improve LTV. What Dixon found is that a customer service interaction is four times more likely to drive disloyalty than to drive loyalty, so you want CS to avoid leading to churn.

This data-driven insight resonated with my personal experiences upon reflection. I could think of many companies I stopped purchasing from due to a bad or disappointing CS experience (online retailers, restaurants, department stores and even a car company), but it is very hard to think of one I am loyal to because their service went above and beyond what I expected. Dixon writes, “you could probably fill up a list a mile long with companies you’ve stopped doing business with because of the bad service you’ve experienced: the cable company that makes you take a day off from work because they can’t do better than an all-day service window, the dry cleaner that ruined your favorite suit and refuses to reimburse you.”

What customers want

While customer service cannot make customers more loyal by providing incredible experiences generally, it can help retention by solving customers’ problems. Dixon’s data shows that when something goes wrong, “the overriding sentiment is: Help me fix it. No need to dazzle me, please just solve the problem and let me get back to what I was doing before.”

Reinforcing this finding, Dixon’s data shows that 94 percent of customers who had low-effort experiences reported that they would repurchase from the company, but only four percent of customers experiencing high-effort interactions planned to make another purchase. What is more, 88 percent of customers with low-effort experiences planned to increase spend with the company, while only four percent of customers with high-effort experiences were going to increase spend. This phenomenon extends to word of mouth, where only one percent of all customers with low-effort experiences said they would spread negative word of mouth about the company but 81 percent of customers with high-effort experiences said they would spread adverse sentiment.

A strategy focused on customer delight does not work for two reasons: delight is rare, and even when it occurs it does not make customers much more loyal than simply meeting their expectations does; meanwhile, customer service interactions are four times more likely to drive disloyalty than loyalty. Operationally, since delight does not work, you should focus your resources, investments, performance metrics, and incentives on reducing and eliminating the sources of customer effort that make customers disloyal.

Where to focus your customer service strategy

As the data shows customers are not looking to be delighted, you should focus your customer service on fixing things when they go wrong. Rather than trying to get customers to feel you exceeded their expectations, you should be getting customers to think you made it easy to resolve their issue and avoid follow-ups.

To achieve this goal and reduce the effort (and perceived effort) your customer must exert, Dixon identified four best practices:

  1. Low-effort companies minimize channel switching by boosting the “stickiness” of self-service channels, so customers avoid contacting CS in the first place.
  2. When customers are forced to use Live Chat, email or phone, low-effort companies do not just resolve the current issue; they empower their reps to head off subsequent calls through next issue avoidance practices. Low-effort companies understand that first contact resolution is not the goal, only a step toward more holistic, event-based issue resolution.
  3. Low-effort companies train their CS team to succeed on the soft side of the service interaction. Rather than soft skills that are about being nice and friendly, agents are trained to actively manage the customer interaction through experience engineering (elaborated below).
  4. Companies that understand the effortless experience empower their agents to deliver it with incentive systems that value the quality of the experience over speed.

Even if you understand the value of a low-effort experience, many companies fail because they try to focus on everything, attempting to create exceptional experiences while also delivering low-effort ones. As I have previously written, when there are multiple goals you fail to achieve any of them. Thus, you need to align your CS org on providing the effortless experience. If you are correctly optimizing for loyalty, you need to focus on finding ways to eliminate or reduce the hassles, hurdles and extra customer effort that lead to disloyalty.

KPIs, good and bad

To implement and optimize a successful strategy, you need to understand and track the KPIs that lead to the desired outcome. In the world of customer service, many of the commonly used KPIs and perceived best practices do not correlate with an effortless experience or increased loyalty. Dixon found virtually no statistical relationship between how a customer rates a company on a satisfaction survey and their future loyalty. He also looked at arguably the most common customer service KPI, the CSAT score, and found that a strong CSAT score is not a reliable predictor of whether customers will be loyal: whether they will repurchase, increase spend and say good things about your company to friends, family, and coworkers.

Dixon points out that it is unfair (and useless) to ask customers if their issue is fully resolved; they do not yet know whether they will need to open another ticket in a day or a week. Repeat contacts are by an order of magnitude the single biggest driver of customer effort. Having to contact a company again because an issue was not fully resolved is a customer experience killer and quite expensive.

To highlight how traditional CS measures fail, Dixon uses the example of a customer whose problem was fully resolved by a rep who went above and beyond (that sounds great, but it tests as relatively neutral for future loyalty). Unfortunately, this was the second time the customer called about the issue (a huge negative). If you were listening to this call, you would conclude, “We did a great job there.” But because it took the customer two tries to get to that moment, and given the huge negative impact that repeat contacts have on the customer experience, this person is still very likely to end up more disloyal: there is less of a chance they will repurchase, less of a chance they will spend more, and a greater chance that they will say negative things to other people, despite the fact that the agent who eventually solved their problem went above and beyond. If you simply classify this experience as positive, you cannot tell whether it benefited or hurt your company.

The core metric for measuring whether your team is delivering an effortless experience is CES (Customer Effort Score). When Dixon compared CES to CSAT, he found CES was 12 percent more predictive of customer loyalty. The most recent CES metric is based on a statement, “the company made it easy for me to handle my issue,” after which the customer is asked to answer (on a 1–7 scale) whether they agree or disagree with that statement. You should then review CES against a normal distribution (10–20 percent of interactions would score as very high- or very low-effort, but most would be somewhere around the mean). Looking at the distribution to understand areas of opportunity can be far more helpful than just considering how your average CES compares to competitors.
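Turning the 1–7 responses into something actionable is straightforward. Here is a minimal sketch, using hypothetical response data and the 1–7 agreement scale described above; the bucket boundaries are illustrative choices, not Dixon's:

```python
from collections import Counter
from statistics import mean

# Hypothetical CES responses (1-7 agreement with "the company made it
# easy for me to handle my issue"; 7 = strongly agree, i.e. lowest effort).
responses = [7, 6, 5, 6, 4, 7, 2, 5, 6, 3, 7, 5, 4, 6, 1, 5]

print(f"Mean CES: {mean(responses):.2f}")

# Bucket the distribution rather than relying on the average alone:
# the high-effort and low-effort tails are where the opportunities are.
buckets = Counter(
    "high effort (1-2)" if r <= 2 else "moderate (3-5)" if r <= 5 else "low effort (6-7)"
    for r in responses
)
for bucket, count in sorted(buckets.items()):
    print(f"{bucket}: {count} ({count / len(responses):.0%})")
```

The point of the bucketed view is exactly what the paragraph above argues: the share of interactions landing in the high-effort tail is more diagnostic than the mean score.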

While CES should be at the core of measuring your CS efforts, a robust customer effort measurement system includes three components. First, at the top of the pyramid, you want to understand the customer’s overall loyalty to you as a company; I am a big proponent of using NPS (Net Promoter Score).
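NPS itself is a simple calculation. The sketch below uses the standard NPS definition (promoters score 9–10 and detractors 0–6 on a 0–10 "how likely are you to recommend us?" question); the survey responses are hypothetical:

```python
def net_promoter_score(scores):
    """NPS from 0-10 'how likely are you to recommend us?' responses:
    percent promoters (9-10) minus percent detractors (0-6),
    yielding a score on a -100 to 100 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical survey responses
scores = [10, 9, 8, 7, 9, 6, 10, 4, 8, 9]
print(net_promoter_score(scores))  # 5 promoters, 2 detractors -> 30.0
```

Note that the 7s and 8s (passives) count in the denominator but not in either group, which is why moving a customer from 8 to 9 improves the score while moving them from 7 to 8 does not.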

Following NPS, you want to understand the amount of effort in the service transaction. As discussed above, CES is a good way to measure that. Dixon also recommends “that companies cross-check their CES results by looking at some of the operational data that underpin effort, for instance, number of contacts to resolve an issue.” CES provides a formidable indicator of transactional customer loyalty, clearly highlights friction points in the customer experience and helps you spot customers at risk of defection due to high-effort service interactions.

According to Dixon, “the next level down is to understand how the customer’s service journey unfolds — in other words, the number and type of touchpoints they used to resolve their issue, in what sequence those service touchpoints occurred (e.g., did the customer just call the contact center or had they first visited the web site?), and the discrete customer experience within each channel (for example, assessing the clarity of information delivered by a service rep or the ease of finding information on the web site).”

Key tactics to achieving an effortless experience

While Dixon provides compelling evidence of the value of creating an effortless experience, and there are KPIs that can show how well you are doing, the key to success (in everything) is execution. To create an effortless experience, you need to minimize channel switching, avoid repeat contacts, appropriately “engineer” the customer service interaction, build the control quotient, create the right culture and optimize the purchase experience.


The danger of channel switching

Channel switching occurs when a customer initially attempts to resolve an issue through self-service, only to then have to send an email or pick up the phone and call. Each time a customer switches channels, it has a significant negative impact on customer loyalty.

One of the core issues leading to channel switching is that many companies believe they know how customers want to be served, and they are wrong. Customers do not prefer high-touch interactions (Live Chat or phone); they see just as much value in self-service as they do in phone interactions. Executives, however, expected customers to prefer the phone by a 2.5-to-1 margin.

Customers who attempt to self-serve but are forced to pick up the phone are ten percent more disloyal than customers who can resolve their issue in their channel of first choice. The 58 percent of customers who are forced to switch from web to phone fall into a “lose-lose” scenario: they cost companies more to serve and end up being less loyal as a result.

To avoid channel switching, you need to create a robust self service system. Dixon writes, “the more controllable drivers of channel switching (47 percent in B2C settings and 37 percent in B2B) can be categorized into three groups: The customer couldn’t find the information they needed. The customer found the information, but it was unclear. The customer was simply using the web site to find the phone number to call the company.”

As I have written frequently, more choice often results in lower satisfaction or performance (choice overload), and this is also a problem when designing customer service. Based on Dixon’s review of CS data, it “became clear that the variety of options to resolve an issue — all of which were presumably added in an attempt to improve the customer experience — were actually detracting from it. It’s an illustration of what’s known as ‘the paradox of choice’….[C]hoice is not nearly as powerful as we might have expected. Instead, guiding customers to the pathway that will require the least amount of effort is much more likely to mitigate disloyalty and create the best experience.”

Another cause of channel switching is when customers do not understand the self-service information. The goal is not simply to get customers to try self-service, it is about getting them to stay in self-service. Dixon suggests several ways to mitigate this problem:

  • Simplify language. Rather than being creative or trying to show how sophisticated your company is, write copy in a way that is very simple and easy to understand. A good way to check your copy is the Gunning Fog Index; your text should score an 8 or 9.
  • Eliminate null search results. Look for customer searches that yielded no responses as well as low-relevance searches. You will probably find that customers often use different words than you did.
  • Chunk related information. Chunking is condensing related information and spacing it apart from other text, allowing readers to scan content easily.
  • Avoid jargon. Many companies use phrases and words common to the company or industry but unknown to the customer (in the casino space, a word like “hold”). Scan your web site pages and FAQs carefully for internal jargon, industry lingo, and terms that would generally confuse the average customer.
  • Eliminate what is not vital. Most service sites fail not because they lack functionality and content, but because they have too much of it. Dixon argues, “the key to mitigating channel switching is simplifying the self-service experience.”
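The Gunning Fog check from the first bullet above is easy to automate. The sketch below uses the standard Fog formula, 0.4 * (average words per sentence + 100 * complex words / total words), with a naive vowel-group syllable heuristic, so treat its scores as approximate; a production tool should use a proper readability library:

```python
import re

def gunning_fog(text):
    """Approximate Gunning Fog Index of a passage.

    Formula: 0.4 * (words per sentence + 100 * complex_words / words),
    where complex words have three or more syllables. Syllables are
    estimated by counting vowel groups, which is a rough heuristic.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        # Count runs of vowels as syllables (naive but serviceable).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

sample = "Help me fix it. No need to dazzle me. Just solve the problem quickly."
print(f"Fog index: {gunning_fog(sample):.1f}")
```

Short sentences built from short words, like the sample above, score very low; help-center copy scoring well above the 8–9 target is a candidate for simplification.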

Avoid repeat contacts

The second key to creating an effortless customer experience is helping your customers avoid repeated contact with customer service. Repeat contacts are the single biggest driver of customer effort (and it is not even close). Needing to call a company back or send another email or start a new live chat because an issue was not fully resolved is a customer experience killer and hugely costly.

In Dixon’s research, he found a huge disconnect between what companies judged as resolved and what customers experienced. Customers reported, on average, that only 40 percent of their issues were resolved in the first contact, meaning that in an additional 30–40 percent of cases customers would disagree with the company’s assessment that the problem was solved. This disconnect is driven by the company fixing the explicit issue but leaving implicit issues unaddressed.

There are two main types of implicit issues driving these repeat contacts that companies are missing. The first are adjacent issues, which according to Dixon’s data account for 22 percent of all callbacks. These are downstream issues that might initially seem unrelated, but are ultimately connected to the first thing the customer called about. The second major source of repeat contacts is experience issues, which constitute 24 percent of all repeat contacts. These are primarily emotional triggers that cause a customer to second-guess the answer they were given, or double-check to see if another answer exists.

To mitigate this issue, you should focus on next issue avoidance. Next issue avoidance starts with a totally different mind-set than simply asking a customer if you resolved their issue. Agents are trained and coached to ask themselves, “How can I make sure this customer doesn’t have to call us back?” Simply letting a customer know that you are trying to save them from having to call back later and deal with another related issue goes a remarkably long way.

The fundamental difference when applying this tactic is you are not looking to simply solve the current issue, but also head off the next issue. According to Dixon, “the best companies think of issues as events, not one-offs, and teach their reps to forward-resolve issues that are related to the original issue but typically go unnoticed by the customer until later.”

Experience engineering

The third element of creating an effortless customer experience is “experience engineering.” Experience engineering refers to managing a conversation with carefully selected language designed to improve how the customer interprets what they’re being told. At its core, experience engineering reflects the importance of perception.

Dixon found “that the customer’s perception of the experience actually counts for fully two-thirds of the overall ‘effort equation.’ Put differently, how the customer feels about the interaction matters about twice as much as what they actually have to do during the interaction.” The exertion required from the customer makes up only 34.6 percent of how they evaluate customer effort. But the interpretation side — the softer, more subjective elements based entirely on human emotions and reactions — make up a shocking 65.4 percent of the total impact.

Many customer interactions that do not require a lot of exertion still feel like a lot of effort to customers, yet most companies have been strategizing about how to reduce customer effort without focusing on the customer’s perception of the experience. Instead, Dixon proposes a three-step process to successfully engineer the customer experience so customers do not perceive high effort:

  • First, experience engineering is purposeful. It’s about actively guiding the customer and taking control over the interaction through a series of deliberate actions.
  • Second, experience engineering is designed to anticipate the emotional response of the customer. Agents who engage in experience engineering are trying preemptively to offer solutions that the customer will find agreeable.
  • Third, an experience engineering approach is focused on finding a mutually beneficial resolution to customer issues. This means matching the customer’s actual and often unstated needs with what the company can offer.

Following these steps can significantly change outcomes by preempting a high-effort interpretation and getting the customer to feel the interaction took very little effort. Dixon writes that, “reducing the interpretation of effort, particularly in situations where there’s nothing else that can be done to reduce exertion, is the ultimate win-win-win—best for the customer, best for the company, and best for the individual reps who are in the hot seat delivering bad news on a daily basis….Reps need to find a way to both be truthful (because the answer in many cases is, unfortunately, still no), but in a way that doesn’t trigger the negative emotional reaction and all the bad outcomes that come along with it.”

Delivering this bad news can be alleviated with the use of positive language. Dixon writes, “In their first attempt at positive language, many people struggle: ‘Uhhhhh, the park closes whenever the magic stops.’ (No, the park actually closes at 8 p.m.) ‘The park closes whenever you leave.’ (No, if you’re still here at 8:01, you’ll probably get some Disney version of the bum’s rush.) Ultimately, the most correct answer is some version of, ‘The park remains open right up until 8 p.m. Then we reopen for even more fun tomorrow morning at 9 a.m. Hope you can join us then!’ How could a customer possibly have a negative reaction to that?”

To drive experience engineering, you should start by analyzing your highest-volume incoming customer requests and reviewing how your agents responded when the customer was not going to get what they wanted. You want to train agents to act as the customer’s advocate, the person who is on the customer’s side and doing everything to make it an easy, low-effort experience. An example Dixon uses is a restaurant server trained to respond to a customer who orders a Coke the restaurant does not serve with, “I’m happy to get you a Pepsi.”

The salient point is for the agent not to be quick to say no. Suggesting an alternative is better than immediately stating what is not available.

Another element of successful experience engineering is learning more about your customer so you can “engineer” their experience. Rather than putting a customer on hold, or asking them to wait during a Live Chat while looking up answers to the customer’s questions, agents can strategically use those moments spent looking at the screen as an opportunity to learn something about the customer and their needs that could become useful later in the conversation.

By listening to the customer, agents can categorize them. For example, you may segment your CS requesters into four categories: The Feeler, who leads with their emotional needs. The Entertainer, who loves to talk and show off their personality. The Thinker, who needs to analyze and understand. The Controller, who just wants what they want, when they want it. When a customer feels that the agent they are interacting with understands them, a lower-effort experience is much more likely.

Giving different experiences to different people based on what the agent has learned (or the company has learned previously) moves the target from consistent service, where the customer service management team defines what “good” is and expects all frontline reps to conform to that standard, to consistently tailored service, where each individual customer is treated individually. Dixon writes, “it cannot be accomplished simply by telling employees what to do in every situation. It is readily apparent that this notion of consistently excellent service will require a serious rethink about how to manage customer service employees.”

Build the Control Quotient

Another element of creating an effortless experience is giving your agents control, what Dixon refers to as the Control Quotient (CQ). Judgment and control differentiate today’s best agents. With increasingly complex live service (phone, Live Chat, real time email, etc) and heightened customer expectations due to simple issues being resolved in self-service, the most important competency for reps to possess is CQ. CQ is the ability to exercise judgment and maintain control in a high-pressure, complex service environment.

Dixon identifies three distinct keys to unlocking CQ that are within the control of customer service leadership to enable:

  1. Trust in agent judgment
  2. Agent understanding and alignment with company goals
  3. A strong agent peer support network

According to Dixon, “these three factors — with all other things being equal — are the difference makers that transform average organizations into world-class low-effort service providers….Frontline reps are made to feel that they are free to do whatever is right to serve that one customer they are interacting with right now.”

Having high CQ is necessary to achieve consistently excellent service because you cannot have great service by treating all customers the same. Standardized service cannot be great because all customers are not the same. Customers have different personalities, different needs and different expectations. Their ability to understand and verbalize their problems and issues is also very different. Dixon explains, “when a company mandates that every customer call include all the standard, company-imposed criteria, and takes away the rep’s ability to deal with the customer at a more natural, spontaneous, human level, the interaction is reduced to a mechanical, rote exchange.”

While CQ is the greatest differentiator of rep performance, the reality is that most agents have moderate to high CQ potential. Effort reduction, though, cannot simply be trained; it must be coached. Training is helpful for building awareness, but effort reduction involves frontline behavior change that can only be delivered and sustained through effective frontline supervisor coaching.

One issue is that many companies inhibit reps from exercising CQ due to the environment of strict adherence they have created and reinforced historically. Judgment and control are not welcomed in these environments. Dixon suggests you “give control to get control of the front line. To allow reps to activate their latent CQ potential, companies need to demonstrate trust in rep judgment. Approaches include deemphasizing or eliminating handle time and the QA checklist, clarifying reps’ alignment between what they do and what the company is trying to achieve, and allowing reps to tap into the collective experience and knowledge of their peers to make smart decisions.”

Only by coaching and empowering your agents will you reduce the effort your customer experiences when dealing with customer service.

Create the right culture

Another key to improving customer service by creating an effortless experience is creating the appropriate culture. You need to create a clear contrast between old and new behavior. Then explain to your team how and why an effort reduction approach differs from the current service philosophy. Given the power of stories, use a change story to continually reinforce why teams need to focus on effort reduction, what’s at stake, and the nature of support they’ll be provided.

In my experience, transforming your customer service approach cannot be another flavor of the day project. You should not make effort reduction another ask. If you are just adding effort reduction to a long list of requirements, it will signal a lack of commitment and competing priorities. Instead, remove requirements such as handle time or strict QA forms to allow pilot teams truly to focus on reducing customer effort, helping your team determine the right ways to change behavior.

You also must make effort reduction easy. Dixon writes, “asking reps to ‘go out and reduce effort’ without a clear sense of where and how will surely be met with failure and confusion.” Instead, start with a pilot with clear tactics and goals. This may include forward resolving a specific type of service issue, or using positive language techniques for a small number of common issues. Finally, provide heightened support and coaching, as pilot teams get comfortable with these approaches.

Purchase experience

The final element of creating an effortless experience is the purchase journey. Dixon’s research showed that reducing customer effort in pre- and post-sales customer touchpoints has a strong impact on loyalty. The ease with which customers can learn about products or services, make a purchase, and obtain post-sales service and support provides a dramatic opportunity for brand differentiation.

An effortless experience is the recipe for increasing LTV

Reorienting your customer service from creating great experiences and cool stories to reducing customers’ effort feels counter-intuitive and is not easy, but the data is impossible to argue with. Your goal should not be a customer service experience you can feel good about but one that improves loyalty, and the effortless experience will have the biggest impact on loyalty (and thus LTV).

Key takeaways

  • Data shows that trying to create an exceptional customer experience has virtually no impact on loyalty and engagement; reducing the effort the customer must exert, however, does improve loyalty.
  • The best way to measure this effort is the Customer Effort Score (CES), which is based on the statement, “the company made it easy for me to handle my issue,” after which the customer is asked to answer (on a 1–7 scale) whether they agree or disagree with that statement.
  • The keys to successfully implementing an effortless experience program are minimizing channel switching, avoiding repeat contacts, engineering the customer service interaction experience, building the control quotient, creating the right culture and optimizing the purchase experience.
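As an illustrative sketch (not from Dixon’s book), the CES measurement described above might be tallied like this; the threshold for counting a response as “agree” is my assumption:

```python
# Sketch: compute a Customer Effort Score (CES) from 1-7 responses to
# "the company made it easy for me to handle my issue" (7 = strongly agree).
# The agreement threshold below is illustrative, not a standard.

def customer_effort_score(responses):
    """Mean agreement on the 1-7 scale; higher implies lower effort."""
    if not responses:
        raise ValueError("no survey responses")
    if any(not 1 <= r <= 7 for r in responses):
        raise ValueError("responses must be on the 1-7 scale")
    return sum(responses) / len(responses)

def share_low_effort(responses, threshold=5):
    """Share of customers who agreed (>= threshold) that it was easy."""
    return sum(r >= threshold for r in responses) / len(responses)

responses = [7, 6, 2, 5, 4, 7, 6]
print(round(customer_effort_score(responses), 2))  # 5.29
print(round(share_low_effort(responses), 2))       # 0.71
```

Tracking both the mean and the share of agreeing customers guards against a few very unhappy customers being hidden by a decent average.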

Author Lloyd MelnickPosted on September 23, 2020August 30, 2020Categories General Social Games Business, General Tech BusinessTags customer service, Effortless Experience1 Comment on How to manage your customer service to have the biggest impact on your profits

Interview with the Queen of Research, Maria Cipollone

Several weeks ago, I wrote a blog post about the perils of market research and surveys. A former colleague and one of the smartest researchers I ever met, Maria Cipollone, highlighted that my post focused on the risks of (bad) research while there are some fantastic opportunities to apply user research to make better games. This argument resonated with me as I completely agree that more data is better than less, and integrating qualitative and quantitative information will lead to better products.


Given the success Maria and I enjoyed working together in the past, I invited her to discuss when and how to use different research tools. Below is our conversation:

Lloyd: Hi Maria, thanks for joining me today. Let’s start by discussing my recent blog posts, where I highlighted some of the challenges with market research and you accurately pointed out some issues with my rationale. I think at the core I was somewhat dismissive of surveys, and equated them with market research, but there is so much more that can be done to understand customers.

Maria: So, surveys often get misused because they’re usually one of the few research tools available to organizations. They pervade our culture; every consumer is used to getting a feedback survey. However, they’re really only good at measuring the following: (1) someone’s attitude about a brand/product (2) someone’s perception of their own experience (e.g., was it negative or positive) (3) their perception of an experience based on “word of mouth” or public opinion (e.g., WOMI or NPS).

Many people use surveys to: (1) try to predict what a consumer/player will do in the future, or (2) have consumers/players report on their past behavior. This is where surveys go wrong. If you are asking your consumer/player/user to self-report the frequency of their behavior, or predict the likelihood that they’ll do something in the future, then you’re collecting bad (spurious) data.

Lloyd: What would be a good use case then in your first scenario, using what it is good for? What would be a use case where a customer’s perception could provide actionable insights?

Maria: NPS surveys are good for finding out brand perception and the impact of experience on brand perception, if measured correctly. You would measure the score, but more importantly you would collect open-ended feedback on why that is so. For example, let’s say you have a casino game that plays ads every time the player wins. The player gives an NPS score of 2 (detractor) and says, “I hate how you interrupt my winning celebration with an ad.” You might move the ads to when the player loses, and re-survey the same cohort to see if it impacts NPS more positively.

I have an even better example. In product research, we use a survey called a Kano analysis; where we ask customers to identify which features are important to them in a product. In the example of the Casino game, do players want to give gifts? Do they want credits for watching ads?

This Kano analysis allows us, via survey, to look at the relative importance of features according to player perception. That way, we can see which features are perceived as “must have” or “delighters,” and dev and engineering can concentrate their efforts on those features.
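As a rough sketch of how a Kano-style tally might work: each respondent answers a functional question (“How do you feel if the game has gift-giving?”) and a dysfunctional one (“…does not have it?”), and the pair of answers is looked up in a classification grid. The grid below is a deliberately simplified, hypothetical version of the standard Kano evaluation table, and the answer scale and category names are illustrative:

```python
# Hedged sketch of Kano classification (simplified grid, invented data).
from collections import Counter

LIKE, EXPECT, NEUTRAL, TOLERATE, DISLIKE = range(5)  # answer options

def kano_category(functional, dysfunctional):
    """Map one respondent's (functional, dysfunctional) answers to a category."""
    if functional == LIKE and dysfunctional == DISLIKE:
        return "Performance"      # more is better, less is worse
    if functional == LIKE:
        return "Delighter"        # unexpected, but loved when present
    if dysfunctional == DISLIKE:
        return "Must-have"        # absence actively hurts
    return "Indifferent"

def classify_feature(answers):
    """Most frequent category across all respondents for one feature."""
    counts = Counter(kano_category(f, d) for f, d in answers)
    return counts.most_common(1)[0][0]

gift_answers = [(LIKE, DISLIKE), (LIKE, NEUTRAL), (LIKE, DISLIKE)]
print(classify_feature(gift_answers))  # Performance
```

Running the tally per feature gives the relative-importance view Maria describes, so the team can see at a glance which features land as must-haves versus delighters.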

Lloyd: How can you tell if the player actually knows what they want? They might think they want gifts but actually want credits.

Maria:  We can trust what the player knows for two reasons:

  1. We want to acquire them; so gifts will get them in the door
  2. We look at mean scores across the board; we consider the aggregate, not just one perception.

But your question brings up another issue I’d like to address: market research vs. user research. If you’re trying to develop a product or features, you need user research to implicitly observe what users “want.”

In the example of the gifts/credits, if you suspect that users are saying one thing but wanting another — that’s a problem space issue. You need to observe qualitatively the use of the product as realistically as possible. This can be done with rapid ethnography, user interviews, contextual interviews, etc. This is user research — where we observe the use of the product in its natural habitat to balance what users say against what they actually do. On top of this, you need metrics to measure against all of this.

Market research measures what consumers will buy and their attitude about the purchase.

User research measures/observes what players will do, and what tools are needed to accomplish that (UX/UI design).

Lloyd: Makes sense, but before delving further into user research (which to me is the biggest opportunity), one last question about surveys/market research. Some of the feedback I got from my post was that surveys largely fail due to being worded poorly. My gut reaction to that is sort of my reaction to the argument that socialism has only failed because it has never been implemented as intended. Is it utopian to expect your surveys (or election polls) to be designed in a non-leading way unless you have an awesome researcher, which not everyone has? Or are most surveys going to be flawed, and thus a reason to use other research tools?

Maria: Haha about socialism; that’s less of a linguistic problem and more of a human problem. But I definitely have a response.

Surveys can fail for a lot of reasons: survey design is definitely one of them. You can pick up biases, because people tend to agree, but there are simple ways to get around that (e.g., always write scales from negative— positive and stay away from agreement).

But they can also fail if:

  1. You are asking people to self report on behavior or predict the future
  2. You have a sample that is not representative of the population you want to infer from

There are many reasons surveys fail, and biases get the attention, but I think they fail mostly because of #1: people use them in the wrong way. It’s like trying to use an X-ray to look at muscle tissue. You need an MRI for that.

Lloyd: Thanks, I’d agree with that analysis. Now onto the really interesting thoughts related to user research. While everyone is familiar with surveys, can you discuss further what are the different user research tools and options? You touched on it above about observing customers in their natural habitat (sort of like Blue Planet) and I remember that was one of your super-powers when we worked together, the value of going to a player’s house and watching them experience the game. Can you delve deeper into how that is done and other types of user research maybe even some rough rank or what should be used when?

Maria: Of course– and thanks for the opportunity. I’ll list some popular methods and what they’re good for, then I’ll discuss the future of UX research as I see it; it’s certainly changing 🙂

Lloyd: Perfect

Maria:

  • 1.) Usability research (in-lab): This is just a method. A lot of times, people unfamiliar with UX research will say, “We need usability.” Here’s where you need usability: you have a prototype that is designed to solve a specific set of problems (that you’re sure are problems). Usability testing (with task analysis or think-aloud) is to EVALUATE that prototype.
  • 2.) User interviews in context: This is a very specific method where you “interview” the user, but really you’re guiding them very carefully through an experience to implicitly observe what problems, “pain points” and inefficiencies they have with a system. In this scenario, a researcher does very little talking. They want to reduce the “babble ratio” here and really let the user demonstrate how they naturally use the system or game. This is when you are trying to uncover problems for your current product, or even a new product to solve. Good products solve problems, not just occupy a market space. Too often, at least in tech, we introduce a technology with no problem. (Look at voice assistants for a good example of no problem space.)

Lloyd: I can think of a few examples of tech that doesn’t solve a problem and then fails, a few billion later 🙂 (maybe Samsung’s new foldable phone).

Maria: You could optimally do #2 as Rapid ethnography, where you go in a home/office/environment to conduct the interviews.

Lloyd: What’s the benefit of going into a home rather than doing it in a lab?

Maria: Home/Environment is always the optimal scenario because that’s how it gets used in the real world. For example, in slots; it’s important to have visual and haptic feedback, because players often sit back with the game.

The more feedback the UX gives them; the more likely they are to turn their attention from the TV or whatever is distracting them back to the game. But knowing the environment that the game/product gets used in is really important to UX design.

Lloyd: Pardon my ignorance, but “haptic feedback”?

Maria: My fault. Haptic feedback is a vibration or buzz that the product emits; like when a game controller rumbles or a phone vibrates.

Visual feedback would be fireworks; flashes, etc

Lloyd: And they would get a different experience in a lab (or your office) than in their natural environment?

Maria: In the lab, it’s superficial. The player is paying sole attention to the game because I’m watching the player, and they want to do a good job (performance bias). In the home, I get the real deal; the dog is barking; the TV is on — I know that their attention gets pulled away from the game. And part of my actionable insight can be to redirect their attention to the game via visual/haptic/sound feedback (although most players play with the sound off).

But, if you’re short on money, there are ways to interview users in context — even remotely, with tools like UserTesting.com.

Lloyd: And that was going to be my next question, where do those tools fit in?

Maria: From what I said, I would take it as: watch the user experience the product in their home if possible (if it’s a game normally played at home), then use a tool like UserTesting.com, and last your office/lab.

Lloyd: Other than observing the player, are there other user research techniques or tools worth incorporating into your playbook?

Maria: Yes– I actually like to do quantitative UX research– methods include surveys, variant (AB) testing, and behavior modeling.

Lloyd: Would you mind elaborating on each?

Maria: I use surveys to benchmark experience. For example, before we do major game OS updates, I might send out a survey and ask players to rate how perceptibly slow the game seems.
Then, I would follow up after the OS update to see if perception shifted and the game seems faster, more colorful.

Here, perception is important — because a happy player is likely to give a good app store rating for a game that seems fast; bright; fun.

You can also measure the psychological impact of your OS improvements.

A/B testing we use to measure small changes: copy changes (e.g., changing the wording “Buy Now” to “Purchase Coins” and seeing if that creates an upsell/lift).
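A copy-change test like the one Maria describes is commonly evaluated with a two-proportion z-test on the conversion rates of the two variants. This is a minimal sketch; the conversion figures are invented:

```python
# Sketch: two-proportion z-test for an A/B copy test ("Buy Now" vs.
# "Purchase Coins"). All figures below are invented for illustration.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between variant conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)              # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 4.8% vs. 5.6% conversion on 10,000 players per arm
z = two_proportion_z(conv_a=480, n_a=10000, conv_b=560, n_b=10000)
print(round(z, 2))  # 2.55; |z| > 1.96 is significant at the 95% level
```

A z above roughly 1.96 suggests the lift is unlikely to be noise, though in practice you would also fix the sample size in advance rather than peeking.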

Behavior modeling we do to sharpen metrics and KPIs

What do we know from user interviews that might help us understand player engagement metrics?

How can we develop hypotheses from the interviews that we can model in the metrics. For example, we observe that users quit when they are asked to purchase every time they run out of coins. So, we look at our engagement metrics, and see if we can link a drop in session length related to the purchase flow timing. Maybe we switch it up; floating the purchase flow after a win, or at the open of the app.
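The hypothesis-checking step Maria describes can be sketched very simply: split sessions by whether the purchase prompt fired at the suspect moment and compare session lengths. The data and field names here are invented:

```python
# Sketch: turn an interview insight into a metrics hypothesis check:
# "sessions where the purchase prompt fires after running out of coins
# are shorter." Session data below is invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

# (prompt_after_coins_ran_out: 1/0, session_minutes) per session
sessions = [(1, 4.0), (1, 3.5), (0, 9.0), (1, 5.0), (0, 8.0), (0, 7.5)]

with_prompt = [m for flag, m in sessions if flag]
without = [m for flag, m in sessions if not flag]
print(round(mean(with_prompt), 2), round(mean(without), 2))  # 4.17 8.17
```

A gap like this does not prove causation on its own, but it is exactly the kind of signal that justifies the follow-up experiment of moving the purchase flow to after a win or at app open.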

I think this is the future of UX research: using observed insight from small samples to understand our users at metrical scale. The future of UX research is assisting data science and machine-learning practitioners in understanding what they’re seeing at scale. Eventually, this will help build algorithms.

Lloyd: That’s really an interesting approach.

One last question around two buzz phrases: focus groups and personas. I have strong thoughts on both but do not want to create a leading question.

Maria: Sure!

Focus groups are misused like surveys are. I’m not a practitioner. I guess the nicest thing I can say is if you’re looking for group-speak about the conception of a brand or experience, then go ahead. I find that product developers like them because they make them feel better. It’s a bunch of enthusiasts speaking enthusiasm.

Lloyd: I’ve seen that

Maria: But, while they can be useful for some, I think they can be very misleading. Like a COVID test you can do cheaply and quickly but that has a high false positive rate. I’m not a good representative of the method, haha.

Lloyd: It may be confirmation bias but it is what I was hoping you’d say.

Maria: I’ve been forced to do focus groups, mostly to satisfy power.

Personas can be very valuable, but you need to do a few things to get them right.

Lloyd: Tell me more

Maria:

  • (1) Are you talking about customers (people who use your products) or consumers (people who you want to use your product)? I find the latter to be more valuable
  • (2) You can develop personas by doing interviews in the method I described above. If you’re doing personas of your current customers, you should follow up with a segmentation, which means you take quantitative measures to back up the persona you’ve made.

Persona is a framework for design. A segment is a portion of an audience.

Lloyd: What’s the value of personas over just segmentation?

Maria: Personas help designers inhabit the psychology of the player/user so they can make a better user experience.

Segments help product people go after an audience or understand the share of the audience the persona occupies.

Personas are for designers.

Segments are for the rest of us.

Lloyd: My concern with personas is: are we over-simplifying (i.e., stereotyping) players? It sounds like the profiles that authorities sometimes abuse.

Maria: Ah, that’s a good concern, and I’ve had many conversations about the implicit biases that get produced in personas. If you ground them in qualitative research (e.g., 10-20 interviews) where you make sure your sample is diverse, you can protect against that issue. And, you must always preach that people shift in and out of personas. For example, I’m a casual gamer with slots, but not poker!

Lloyd: Thanks so much Maria, I found our chat really enlightening. Before we end, anything you want to add?

Maria: Research is worth its weight in gold! Let’s do it again AND THANK YOU!

Author Lloyd MelnickPosted on September 16, 2020September 22, 2020Categories General Social Games Business, General Tech Business, Lloyd's favorite postsTags market research, user research3 Comments on Interview with the Queen of Research, Maria Cipollone

People Analytics for Online Gaming

People Analytics for Online Gaming

Last month, I wrote about some applications online gaming companies can take from the world of operations analytics, which are primarily used by traditional and retail businesses, and a course on People Analytics from Wharton showed some ways this area of analytics could be used to improve our businesses. While people analytics is often the domain of HR professionals, there are valuable elements for managers across tech businesses (many of whom do not have robust HR teams). Below are some of the most important takeaways from the course.

Identifying the noise and improving performance evaluations

A critical role for any leader or manager is accurately evaluating performance of your employees. Accuracy is important to ensure you provide useful feedback that helps people improve, assists you in putting the right people in the correct roles and identifies the skills needed for success in specific functions.

The fundamental challenge in performance evaluation is that performance measures are very noisy. There is a range of possible outcomes outside of the employee’s control. The challenge is separating skill and effort from luck so that you understand true performance.

In the course, the instructors highlight how often people confuse skill with luck. They start with an example from sports, showing that professional American football teams’ ability to draft (select out of university) players is almost entirely luck. While some teams have had a string of success, success in one year has no predictive power for success in future years. If skill were a key factor, you would expect a team to repeat its success.

It also holds true with investment analysts. An analyst who has a great year is no more likely to have above market results the next year than one of the poorest performing analysts.

There are many reasons we confuse this luck with skill:

  • Interdependence. I have found a humbling amount of work depends on other people, if they are great we look great, if they are not, we look bad. You should not attribute individual performance to something that is at the group level. In these cases, performance should be evaluated as a group. Conversely, reliable individual evaluation requires seeing people on other teams (for example, Tom Brady’s play on the Buccaneers will help assess whether his performance was due to him or the environment).
  • Outcome bias. We tend to believe good things happen to those who work hard and judge by outcome, not by process.
  • Reverse causality. When we see two correlated factors, we tend to believe one caused the other. In reality, there may be no causality, or it may run in the other direction. This leads us to see things that do not exist and can prompt us to give people undeserved credit or blame. One example cited in the course was research that showed charisma did not impact whether a CEO was successful, but successful leaders were considered more charismatic.
  • Narrative seeking. We want to make sense of the world and tell a causal story.
  • Hindsight bias. Once we have seen something occur, it is hard to accept that we did not see it coming. We rewrite in our minds the history of the process.
  • Context. We tend to neglect context when evaluating performance. We over-attribute performance to personal skills and under-attribute it to environmental factors such as the difficulty of the problem the employee faced, quality of their team, etc. In psychology, this issue is referred to as the Fundamental Attribution Error: attributing outcomes to personality traits rather than situational factors.
  • Self-fulfilling prophecies. People tend to perform consistent with expectations. High expectations increase performance; low expectations decrease it.
  • Small samples. Small samples lead to greater variation; what we see in a small sample may not be representative of the larger population.
  • Omitted variable bias. There could be an additional factor driving both the performance and what we think is causing the performance. For example, we may think higher compensation is leading to better performance. The truth might be that extra effort is causing both higher compensation and superior performance, and thus the key variable (effort) has been omitted.

When you are evaluating performance, there are several tools to improve your accuracy. You need to focus on the process the employee (or potential employee) followed rather than only the outcome; we normally omit almost 50 percent of the objectives that we later identify as relevant to success. Thus, you should look at a much broader set of objectives that impact the business. This means determining what increases the likelihood of superior performance: beyond the traditional outcomes, there are usually four or five things that may not be obvious but contribute to overall success. A few years ago, I wrote how one basketball player (Shane Battier) was much more valuable than many players who scored more points or otherwise had flashier statistics; the same holds true in traditional business.

You need to look carefully at the job and understand what drives success. Define success not only by outcomes but how well these factors predict other KPIs, attrition, rate of promotion, etc. In the course, they also point out what works for one role or company does not necessarily work for others. Google found that GPA was an awful predictor of performance, but for Goldman Sachs it is the gold standard of who will be successful.


Additional ways to improve performance evaluation include:

  • Broaden the sample. Add additional opinions, more performance metrics, different projects and assignments. The key is to use diverse, uncorrelated signals.
  • Find and create exogenous variation. The only truly valid way to tease out causation is to control an employee’s environment. Have the employee change teams, direct report, projects, offices as the variation will provide a better sense of the employee’s ability.
  • Reward in proportion to the signal. Match the duration and complexity of rewards to the duration and complexity of past accomplishments. For short, noisy signals it is better to give bonuses and praise rather than raises and promotions.
  • The wisdom of crowds. The average of many guesses is surprisingly good (even in exercises like guessing the number of jelly beans in a jar), so get multiple experts to help with your assessment. Ensure, though, that their predictions are independent of each other (they are not talking to each other, they do not share the same background, etc.).
  • Ensure statistical significance. A small sample (one project, one season, etc) is less likely to give you an accurate measure.
  • Use multivariate regression. This analysis will allow you to separate out the influence of different characteristics.
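As a sketch of the multivariate regression point, the example below fits pure-Python ordinary least squares to invented data where performance depends on both tenure and team quality; the data are constructed so the true coefficients are known, letting the regression separate the two influences:

```python
# Sketch: multivariate regression (OLS via the normal equations) to
# separate the influence of tenure and team quality on a performance
# score. All data are invented; the true model is 1 + 0.5*tenure + 0.2*team.

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    v = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                                       # forward elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))     # partial pivot
        A[i], A[p] = A[p], A[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            v[r] -= f * v[i]
    coef = [0.0] * k
    for i in reversed(range(k)):                             # back substitution
        coef[i] = (v[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

# columns: intercept, years of tenure, team quality (1-10)
X = [[1, 2, 8], [1, 5, 4], [1, 3, 7], [1, 7, 3], [1, 4, 6]]
y = [3.6, 4.3, 3.9, 5.1, 4.2]  # constructed as 1 + 0.5*tenure + 0.2*team
print([round(c, 2) for c in ols(X, y)])  # [1.0, 0.5, 0.2]
```

The regression attributes the right share of the score to each factor even though tenure and team quality are correlated in the sample, which is exactly the separation a raw average cannot do.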

At the end of the day, you need to separate the signal from the noise to evaluate current performance and predict future success. Someone may have had a great performance or year but they may be a less valuable future employee than someone else because of luck or other environmental factors.

Recruiting the right people

Evaluating performance is not only important for your current team but also for recruiting the best new hires. Hiring the wrong person can have huge consequences, including missed growth opportunities, damage to your culture and decreased output. Yet, most companies find consistently recruiting the right people difficult. This is often caused by the Illusion of Validity: we think we know more about people than we actually do. We interview somebody and believe we can judge his or her suitability for a job. This Illusion is popped by research showing the correlation of several hiring tools to subsequent performance (ranked from most effective to least):

  1. Work samples.
  2. Cognitive ability tests (these are general intelligence tests).
  3. Structured interviews.
  4. Job knowledge tests.
  5. Integrity tests.
  6. Unstructured interviews.
  7. Personality tests.
  8. Reference checks.

Several of the low-scoring tools reinforce the Illusion of Validity. Unstructured interviews, where you meet someone and get a sense of their strengths and weaknesses, are often the paramount driver of whether we hire a candidate, but we are not good judges of character. I remember reading that when President Bush first met Russian President Putin in 2001, he said, “I looked the man in the eye. I found him to be very straightforward and trustworthy.” We saw how well that worked out. As the above research also shows, reference checks are even less effective in the hiring process for similar reasons.

What does work is seeing examples of a candidate’s previous relevant work, intelligence tests and structured interviews. A structured interview is one designed to assess specific attributes of a candidate.

Use analysis for internal promotions

As well as improving the hiring process, People Analytics can help move the right people internally into the right roles. Often, people are promoted based on having done a great job in their current role. The course shows, though, that this approach often leads to negative outcomes (both for the employee and the company). The skills needed to succeed in the next job may not be the same skills that led to success in the current job. Performance in one job is not automatically a predictor of performance in a new role.

Just as it is important to understand the key predictors of success when recruiting, you need to do the same with internal promotion. Understand what leads to success in the new role and hire internally (or externally) those most likely to succeed. The good news is that research has shown that people promoted performed better overall than new hires into comparable roles.

Reducing employee churn

Attrition is one of the costliest problems companies face, and People Analytics can help combat it. The expense of losing an employee includes hiring a replacement, training costs, loss of critical knowledge and the impact on customer relationships. You should start by analyzing percent turnover at specific milestones (3 months, 6 months, 1 year, etc.) and evolve into using multivariate regressions to predict who will reach each milestone. As you get more sophisticated, you can build a survival model to understand what proportion of employees will stay with your company over time, and finally a survival/hazard rate model to test which factors accelerate the risk of exit.
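The survival-curve step can be sketched with a basic Kaplan-Meier estimator, which handles the fact that many employees have not left yet (their tenures are "censored"). The tenure data below is invented:

```python
# Sketch: Kaplan-Meier estimate of employee retention. `left` marks
# whether the employee actually quit (False = still employed, censored).
# Tenures are in months; all data below are invented for illustration.

def kaplan_meier(observations):
    """observations: list of (tenure_months, left). Returns {t: S(t)}."""
    s, curve = 1.0, {}
    for t in sorted({ot for ot, left in observations if left}):
        at_risk = sum(1 for ot, _ in observations if ot >= t)           # still employed at t
        events = sum(1 for ot, left in observations if left and ot == t)  # quit at t
        s *= 1 - events / at_risk
        curve[t] = round(s, 3)
    return curve

obs = [(3, True), (6, True), (6, False), (12, True), (12, False), (24, False)]
print(kaplan_meier(obs))  # {3: 0.833, 6: 0.667, 12: 0.444}
```

Reading the curve at each milestone gives exactly the "what proportion will stay" view; a hazard-rate model then adds covariates to test which factors accelerate exits.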

During the course, they also provided some interesting data on why people leave. The decision to quit is most commonly driven by external factors: comparing the current job to a new opportunity. This understanding is critical because, while internal factors do play a role, internal issues have a relatively small relationship to how likely people are to churn.

To reduce churn over time, the instructors of the course suggest an informed hiring strategy (where predicted churn is integrated into who is hired) and targeted interventions (reduce factors that accelerate risk of exit, address unmet needs, focus retention efforts, etc.).

Using network analysis to improve collaboration

Another great takeaway from the course was how to use network analysis to understand, improve and incentivize collaboration. Without getting too granular, network analysis involves looking at the informal links between employees: who gets information from whom and what direction(s) that information flows. Once you draw that map, you can see who is central to communications, who is outside the map, areas for improvement and people who should be rewarded for their role in collaboration.

[Image: network map]

While there are many details to creating and analyzing a network, there are five key areas to focus on when looking at individuals (there are no right and wrong answers for each attribute, optimizing depends on the goal and environment):

  1. Network size. How many people are they connected to?
  2. Network strength. How strong and frequent are the lines of communication?
  3. Network range. How many different groups are they connected to? Range would be small if you are connected to everyone on your team, even if it is a big team, and large if you are connected to one person at every other corporate function (i.e., marketing, accounting, analytics, etc.).
  4. Network density. Are their connections connected to different people or to each other?
  5. Network centrality. Is everyone equally central, or are some in the middle and others on the fringes?
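As a rough illustration, the five attributes can be computed from a simple communication map. The graph, team assignments, and edge weights below are all hypothetical, and degree centrality stands in for the more general notion of centrality:

```python
# Weights approximate how strong/frequent each line of communication is.
graph = {
    "ana":   {"ben": 3, "carla": 1, "dev": 2},
    "ben":   {"ana": 3, "carla": 2},
    "carla": {"ana": 1, "ben": 2},
    "dev":   {"ana": 2},
}
teams = {"ana": "product", "ben": "product",
         "carla": "marketing", "dev": "analytics"}

person = "ana"
contacts = graph[person]

size = len(contacts)                            # 1. network size
strength = sum(contacts.values())               # 2. total tie strength
range_ = len({teams[c] for c in contacts})      # 3. distinct groups reached
# 4. density: how many of ana's contacts also talk to each other
pairs = [(a, b) for a in contacts for b in contacts if a < b]
density = sum(1 for a, b in pairs if b in graph[a]) / len(pairs)
# 5. degree centrality: share of all other people ana is connected to
centrality = size / (len(graph) - 1)

print(size, strength, range_, round(density, 2), centrality)
```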

Understanding how your company's network works will allow you to see collaboration patterns. For example, by deconstructing performance, you can determine whether collaboration patterns impact performance. If there is a positive causal relationship, you can work to replicate or strengthen these relationships. If there is no relationship, your team might be wasting time on unnecessary collaboration.
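A sanity check before investing in collaboration patterns is whether a collaboration metric is even correlated with performance. This sketch uses hypothetical data and a plain Pearson correlation; correlation alone, of course, does not establish causality:

```python
import math

# Hypothetical per-employee measurements
network_range = [1, 2, 2, 3, 4, 5, 5, 6]
performance   = [55, 60, 58, 70, 72, 80, 78, 85]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(network_range, performance)
print(round(r, 3))  # strongly positive on this toy data
```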

You can use this analysis to understand whether collaboration is needed and where. Then you can strategically build ties and bridges between different parts of the organization. This can be achieved with:

  • Cross-functional meetings
  • Conference calls or video conferences
  • Job rotations
  • Site visits
  • Events

You should also identify where collaboration is unnecessary or overly burdensome and reduce demands on people. Match overloaded people with well-regarded employees who are under-utilized and can relieve some of the burden. Also identify the small number of new connections that would have the biggest positive impact on team connectivity, and shift responsibilities more evenly across members.

Tying performance evaluation with collaboration

People analytics can be particularly helpful in connecting the performance evaluation methods discussed above with analysis of collaboration. As I wrote earlier, the key to good performance reviews is understanding what drives the outcomes you are looking for. If collaboration is one of those success drivers, you need to evaluate it thoroughly and incorporate it into performance reviews and internal promotions (you do not want to promote someone weak at collaboration into a role where it is vital to success).

You should revise your evaluation systems to include collaboration. First, this will provide incentive to employees to build and use meaningful relationships. Second, it will recognize team members who help others win new clients or serve current customers, even if those direct results accrue to someone else (the basketball player who passes the ball rather than dunks).

To achieve this goal, you need the right measures. If you are assessing individual collaboration, you need to look at elements the individual controls. You then need to ensure reliability, meaning the assessments remain consistent over time and across raters. Third, the measures must have validity (accuracy). There also needs to be comparability: you must be able to use the measures to evaluate everyone you are assessing. Finally, they must be cost effective; it should not be too expensive to collect the information.
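One way to check reliability across raters is an agreement statistic such as Cohen's kappa, sketched here on hypothetical collaboration ratings of the same ten employees:

```python
# Two raters label the same ten employees (hypothetical data)
rater_a = ["high", "high", "low", "med", "med", "low", "high", "med", "low", "high"]
rater_b = ["high", "med", "low", "med", "med", "low", "high", "med", "low", "med"]

def cohens_kappa(a, b):
    """Agreement beyond chance: 1.0 = perfect, 0 = no better than chance."""
    n = len(a)
    labels = set(a) | set(b)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # chance agreement: product of each rater's marginal label frequencies
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa(rater_a, rater_b)
print(round(kappa, 3))  # roughly 0.706: substantial, not perfect, agreement
```

A low kappa suggests the assessment itself needs tightening before it drives reviews or promotions.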

Key takeaways

  • You need to align performance evaluations with the underlying factors that create success; deconstruct what leads to the outcomes you want and then assess people on those factors.
  • Some common problems when evaluating people include context (attributing results to a person when the environment drove success or failure), interdependence (assessing on an individual level a result that was driven by a team), self-fulfilling prophecies (people perform consistent with expectations) and reverse causality (we attribute causality to correlation, even though the factors may not be related or may be in the other direction).
  • You should assess how your team or company works as a network, looking at the relationships, and then encourage and grow ones that lead to desired outcomes.

Author Lloyd Melnick | Posted on September 9, 2020 | Categories: Analytics, General Social Games Business, General Tech Business | Tags: bias, collaboration, interdependence, network analysis, People Analytics, performance evaluation, recruiting

Summer highlights from The Business of Social Games and Casino

Normally, I take the summer off from writing blog posts, but 2020 is anything but a normal year. Unfortunately, the pandemic meant many people had to cancel or postpone their holidays, or simply had more time to add to their knowledge. Thus, I continued to post, though on a slightly reduced schedule. If you were lucky enough to get away, below is a summary of my posts over the summer that you may have missed. Enjoy.

Lifetime Value Part 29: Increasing Retention

Key Takeaways

  • Retention is the strongest driver of LTV, and data from Google shows the most important retention KPI is the number of players who return on day 2 after installing your game.
  • The strongest driver of D2 retention is how many minutes your customers stay and play within the first ten minutes of starting the app.
  • To improve retention between the first and second days, make the early experience faster and more fun by improving load times (while reducing secondary loading), making your lobby intuitive, and not distracting your player with a bad tutorial or promotions.
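The D2 retention KPI above can be computed directly from install and session logs. A minimal sketch over hypothetical data:

```python
from datetime import date

# Hypothetical install dates and session days per player
installs = {
    "p1": date(2020, 9, 1),
    "p2": date(2020, 9, 1),
    "p3": date(2020, 9, 2),
    "p4": date(2020, 9, 2),
}
sessions = {
    "p1": [date(2020, 9, 1), date(2020, 9, 2)],
    "p2": [date(2020, 9, 1)],
    "p3": [date(2020, 9, 2), date(2020, 9, 3)],
    "p4": [date(2020, 9, 2), date(2020, 9, 5)],
}

def d2_retention(installs, sessions):
    """Share of installers who returned the day after installing."""
    returned = sum(
        1 for player, day0 in installs.items()
        if any((s - day0).days == 1 for s in sessions.get(player, []))
    )
    return returned / len(installs)

print(d2_retention(installs, sessions))   # 0.5 — p1 and p3 came back on day 2
```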

Behavioral Economics Tips for Gaming Companies

Key Takeaways

  • A key lesson of behavioral economics is that less choice often drives better results. When the number of choices increases, our ability to make a decision decreases.
  • Consumers hate uncertainty. Questions without answers cause fear, killing the experience and sales.
  • AB and multi-armed bandit tests help you understand how your players will react in the context of your game; market research, conversely, might provide bad information, as people do not know what they want.
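As a hedged sketch of the multi-armed bandit idea, an epsilon-greedy loop shifts traffic toward the better-observed variant instead of splitting it evenly for the whole test. The variants and conversion rates below are entirely hypothetical:

```python
import random

random.seed(7)
true_rates = {"variant_a": 0.05, "variant_b": 0.11}   # unknown in real life
shows = {v: 0 for v in true_rates}
wins = {v: 0 for v in true_rates}

def pick(epsilon=0.1):
    """Explore with probability epsilon; otherwise exploit the best
    observed conversion rate (unshown variants score optimistically)."""
    if random.random() < epsilon:
        return random.choice(list(true_rates))
    return max(shows, key=lambda v: wins[v] / shows[v] if shows[v] else 1.0)

for _ in range(5000):
    v = pick()
    shows[v] += 1
    wins[v] += random.random() < true_rates[v]   # simulated conversion

print(shows)   # traffic typically concentrates on the stronger variant
```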

How to avoid meetings about the trivial, aka bikeshedding

Key Takeaways

  • Bikeshedding is the tendency we have to spend excessive time on trivial matters in meetings, often glossing over important ones.
  • Bikeshedding is damaging because it wastes very valuable time and, more importantly, leads to insufficient discussion of important issues.
  • To avoid bikeshedding, set a clear purpose for all meetings (and eliminate conversations about other issues), only invite necessary people, appoint a decision maker and have the decision maker set clear parameters for the meeting.

Lifetime Value Part 30: Why clumpiness should be one of the KPIs you focus on

Key Takeaways

  • We normally focus on analyzing recency, frequency and monetization of the customer but by adding a new KPI, clumpiness, we get a much better understanding of their expected value.
  • Clumpiness refers to the fact that people buy in bursts and that those customers could be extremely valuable.
  • Clumpiness can help you better segment players, predict VIPs and target your reactivation efforts and spend.
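Clumpiness can be quantified; one common formulation is an entropy measure over inter-purchase gaps that scores an even spender near 0 and a bursty one near 1. This is a sketch on hypothetical purchase days, not necessarily the exact formula the original post uses:

```python
import math

def clumpiness(event_days, window):
    """Entropy-based clumpiness over an observation window:
    ~0 for evenly spaced events, approaching 1 for tight bursts."""
    times = sorted(event_days)
    edges = [0] + times + [window]
    gaps = [(b - a) / window for a, b in zip(edges, edges[1:])]
    n = len(times)
    entropy = sum(x * math.log(x) for x in gaps if x > 0)
    return 1 + entropy / math.log(n + 1)

steady = clumpiness([10, 20, 30, 40], 50)   # evenly spaced buyer
bursty = clumpiness([1, 2, 3, 4], 50)       # buys in one early burst
print(round(steady, 3), round(bursty, 3))   # steady ~0, bursty well above it
```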

Why Evo’s $2 billion+ acquisition of NetEnt is more important (to both iGaming and social casino) than you think

Key Takeaways

  • Evolution Gaming, the largest Live Dealer provider, recently announced a bid to acquire NetEnt, the largest slots provider, for more than $2 billion.
  • The fact that Evo is the acquirer shows that Live Dealer is eclipsing slots in the casino ecosystem.
  • Real money operators need to ensure they balance resources between Live Dealer and slots, while social casino companies need to figure out the best way to embrace this opportunity.

Customer analytics tips for gaming companies

Key Takeaways

  • People who wander around a retail location spend more than those who immediately find what they are looking for, and retailers optimize to create this jiggliness. Online casinos and games can also build in jiggliness so players find new games and offerings rather than simply going straight to the one they are looking to play.
  • While satisfaction with customer service positively impacts profitability, the relationship is not linear. Improvements have a strong impact when players are highly dissatisfied (and that is corrected) or when customers already receiving great service see further improvements; companies in the middle often do not see a positive ROI on CS improvements.
  • A relationship between two variables does not show one is causing the other; to have causation there must be a relationship, plus temporal antecedence, plus the absence of a third variable driving both factors.

How to get your big initiatives done

Key Takeaways

  • Many important initiatives, from new products to operational efficiency, bog down and die in the middle phase. They initially have momentum but stall once the initial burst dies down.
  • To MOVE projects through this middle phase, the Middle element needs a clear and concrete strategy and you need an Organizational structure with capacity to complete the initiative.
  • The final keys to getting through the middle phase are Valor, making tough decisions and prioritizing the initiative, and getting Everyone involved.

How Operations Analytics can help online gaming companies

Key Takeaways

  • While Operational Analytics are a focus primarily in retail and traditional businesses, there are many best practices that iGaming and social game companies can leverage.
  • Forecasting is central to generating and earmarking resources but is often a challenge for game companies; rather than trying to create a point forecast, create a range based on moving averages and standard deviation. For new products, create a simulation that shows the distribution of potential outcomes and the possible risks and rewards.
  • You need one, and only one, distinct goal and then optimize your strategy around that goal; it’s impossible to optimize for multiple goals. Use constraints to incorporate what used to be additional goals.
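The range-forecast advice above can be sketched as a moving average plus or minus the standard deviation of recent observations (the revenue figures are hypothetical):

```python
import statistics

monthly_revenue = [100, 110, 95, 120, 130, 115, 125, 140]  # $k, hypothetical

window = monthly_revenue[-4:]              # 4-month moving window
center = statistics.mean(window)           # moving average as the midpoint
spread = statistics.stdev(window)          # recent volatility sets the band
low, high = center - spread, center + spread

print(f"forecast range: {low:.1f}k to {high:.1f}k (center {center:.1f}k)")
```

A Monte Carlo simulation over assumed input distributions would extend this to the new-product case.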

The risks of market research

Key Takeaways

  • A sole reliance on customer input and feedback, traditional market research, is built on a model of human decision making that assumes humans are rational, while in practice we are not.
  • Not only do people provide a response inconsistent with their actions, they often do not understand the underlying causes of their behavior.
  • Use one or multiple tools that show actual decision making, such as ABn testing or looking at reactions to similar initiatives in adjacent industries, rather than relying on what customers believe is their preference.

Author Lloyd Melnick | Posted on September 2, 2020 | Categories: Analytics, behavioral economics, General Social Games Business, LTV, Social Casino | Tags: behavioral economics, LTV
