Chapter 1. Fraudster Traits

That voodoo that you do so well…

Cole Porter

Aristotle said that something should be described as “good” if it fulfills its purpose. A good knife, for example, would be one that cuts well. With all due respect to the ancient philosopher, we’re going to completely break with this way of looking at things.

It’s common in fraud fighting to talk about a bad card, bad IP, bad address, and so on, and it’s an easy way to communicate. Everyone knows what you mean: you’re saying a card (or IP or address) has been linked in the past to fraudulent activity. But cards, IPs, addresses, and so on aren’t really bad, and talking about them as though they are can be confusing in the long run and may lead to unnecessary friction or false declines for good customers. The real user may still be using their credit card legitimately, even if it’s also being used fraudulently. That IP may be public, or it could be that it’s now being used by someone new. An address might have many people associated with it, most of them legitimate, or a good citizen might have moved to an address recently vacated by a fraudster.

As fraud fighters, what we’re interested in is the identity behind a transaction or an action taken online. Is this identity a real one? Is the person using it the one to whom it belongs? That’s the question at the center of almost all the key questions fraud fighters face, and it’s the context behind looking into impersonation techniques, deception techniques, card and account testing, and so on, informing all the elements we’ll be exploring in this chapter (and in most of the rest of the book as well). Fraudsters try to blend into the background successfully (much like the moth on this book’s front cover) by attempting to look plausible in a variety of ways.

In this chapter, we’ll look at some of the most common traits shared by fraudsters and fraud attacks, regardless of their chosen target industry or preferred attack method. Some of these traits will be covered in greater detail in dedicated sections later in the book but are included here to set the scene, to get all readers on the same page at the start, and to establish clarity about definitions since there are some terms and distinctions, such as abuse versus fraud, which are used in different ways by different companies.

As we go through different techniques and traits, bear this one thing in mind: at the end of the day, you don’t care about what (the IP, address, email, etc.). You care about who.

Impersonation Techniques

When a fraudster is trying to steal from your company, there are three likely scenarios. In the first scenario, they may pretend to be someone else, using a legitimate identity as cover for their fraud and to make their payment method look plausible. In the second, they may try to appear completely fresh, using a fake or synthetic identity, in which case obfuscation is important. In the third scenario, they may be a so-called friendly fraudster using their own identity and planning to file a fraudulent chargeback.

Impersonation techniques are the bread and butter of fraudsters engaged in the first scenario. In general, the impersonation is built around the payment method. The reason they want to craft the rest of their apparent identity to look like someone else is to convince you or your system that they are the real owner of the card, electronic wallet, or other payment method. The stolen payment information is usually purchased, together with details about the victim’s name and address and often with the victim’s email address and perhaps phone number, so that the fraudster has a good base on which to build their impersonation convincingly. The address gives both billing and shipping details, and also provides assistance with IP.


Device ID and behavioral information are far harder to spoof than other elements of a person’s online presence, unless the attack is being carried out not with bought data but with malware being used to skim the information from live visitors. But since people do legitimately use multiple devices and behavioral information varies depending on circumstances, these signals are often overlooked. Though they can be helpful in confirming a good user, the absence of a known device or common behavior is not enough to pinpoint a fraudster, and relying on them in this way would cause false positives to skyrocket, with high numbers of good customers being mistakenly rejected. For accuracy, these signs must be combined with other signals from the user which can help piece together the story to show whether the user is legitimate or fraudulent.

Where emails are concerned, fraudsters will either take over the victim’s account if they’ve stolen their password and other information, or create a new email address that appears to match the victim’s real one, their name, or known facts about them. Some fraudsters will incur the risk of using the victim’s real phone number, if known, because even though this leaves open a risk that the site will call the number and discover the trick, in practice that rarely happens. Others simply use disposable SIM cards that are a match for wherever the victim lives, whereas still others rely on Voice over IP (VoIP)-based phone numbers (though such numbers, which can sometimes be identified, can prove to be a weakness for the fraudster and thus an opportunity for a fraud prevention team).

For physical orders, the shipping address is typically the most difficult part of the impersonation. The fraudster can risk using the real address if they feel confident they or an accomplice will be able to carry out a bit of porch piracy, but this certainly leaves open the possibility that the victim will in fact receive the package, leaving the fraudster with nothing after all their hard criminal work. More commonly, fraudsters using the victim’s real address to foil detection will try to call customer support after the order has gone through to change the address. (For more on shipping address manipulation, see Chapter 7.) The click-and-collect option, in which customers can buy online and then pick up the goods in the store or at a designated pickup point, which has become popular as a result of the COVID-19 pandemic, is another way of getting around the address challenge (and is covered in Chapter 8).

There are also more involved alternatives, notably address verification service (AVS) spoofing, which is a good example of why fraud teams can’t rely too much on even the most commonly used tools. The purpose of AVS is to check that the address being given as part of the transaction matches the address the bank has on file for that card. It does not check whether that address really exists and does not provide the merchant with protection in case of a chargeback. It’s also limited: AVS spoofing relies on the fact that AVS, which is used by many fraud teams to verify addresses in the countries in which it works, only checks the numbers in the address, not the letters. So, a fraudster could trick a system into thinking that 10 Main Street, zip code 12345, was a match for 10 Elm Avenue, zip code 12345. On Main Street, the fraudster would need to have a presence of their own—an office space, PO box, or residence—or they could use a mule (an associate assisting them with their activities, in this case by providing a safe delivery address and, often, reshipping services) with an appropriate address.

Here’s an example to illustrate this point:

Address on file:
10 Elm Avenue, Emeryville, zip code 12345
Possible addresses to fool AVS systems:

10 Main Street, Oakland, zip code 12345 → FULL MATCH

10 I-AM-A-FRAUDSTER Street, MOONVILLE, zip code 12345 → FULL MATCH

1 Elm Avenue, Emeryville, zip code 12345 → ZIP MATCH (a partial match, like this, is often enough to satisfy a fraud prevention system)

In more subtle cases, fraudsters sometimes play with the fact that some towns share the same zip code but may each have a Main Street (for example). Even manual review may fail to pick this up at first glance.
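To make the weakness concrete, here is a minimal sketch of a numbers-only comparison like the one described above. The function names and match labels are invented for illustration; this is not any real AVS implementation, just a toy model of why letter changes slip through:

```python
import re

def digits(text):
    """Keep only the digits from an address field, in order."""
    return "".join(re.findall(r"\d+", text))

def avs_check(street, zip_code, street_on_file, zip_on_file):
    """Toy AVS-style check: compares only numeric parts, ignoring letters."""
    street_match = digits(street) == digits(street_on_file)
    zip_match = digits(zip_code) == digits(zip_on_file)
    if street_match and zip_match:
        return "FULL MATCH"
    if zip_match:
        return "ZIP MATCH"
    if street_match:
        return "STREET MATCH"
    return "NO MATCH"

# "10 Main Street" and "10 Elm Avenue" share street number 10 and the zip,
# so a numbers-only check reports a full match despite different streets.
```

Note how a fraud prevention system that accepts a partial (zip-only) match, as many do, is satisfied by even weaker spoofs.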

Another option for evasion is muling. Mules have been a fraudster staple for years and grew in popularity amid the economic uncertainty surrounding the COVID-19 pandemic, which increased demand for work that could be done from home. Some mules know what they’re a part of, while others are themselves victims of the scheme, sometimes cheated of their wages and of any outlay they have taken on when the fraudster drops their services. There are other dangers too, as noted in Chapter 20.

Mules are often used outside of the AVS spoofing use case, as they’re valuable in many ways. For example, even if the address isn’t a perfect match, if the mule lives in the right area a fraud prevention team might consider the address to be legitimate. Mules also dramatically expand a fraudster’s reach through click-and-collect. They can be relied upon to reship goods, meaning that fraudsters in Eastern Europe can easily receive packages via a respectable-looking address in the United States. They can even set up accounts or place orders, to make the IP a good match for the identity being impersonated. Chapter 7 discusses shipping manipulation and mules in greater detail.

As fraud fighters, it’s our job to analyze and make the most of every piece of data we have and can collect or source to piece together the story, whether fraudulent or legitimate. Ultimately, we must bear in mind that it’s not the data points themselves that are being judged, but what they mean within the context of all the information we have in this case, and how it fits together.

Deception Techniques

Deception techniques can be part of a successful impersonation, especially in the case of fraudsters using sophisticated malware to scrape customers’ online appearances, but they are also vital when carrying out a blank slate attack using a fake identity. The point of these obfuscatory tricks is to conceal the real location, device, and so on of the fraudster; they are distinct from impersonation techniques, which aim to ape the details of a real and specific person.

With enough determination and tech savvy, anything and anyone can be manipulated. Experienced fraud fighters who have seen state-run or state-funded malicious actors at work can attest to the fact that when the attackers are really motivated, they can look indistinguishable from good customers. We won’t go into the deeper and darker forms of manipulation here because, as mentioned in the Preface, we suspect that more than one fraudster will read this book at some point. In any event, junior fraud fighters are generally introduced to the nuances of deception techniques early on in their careers.

However, we’ll pick out a few of the most common ones, largely to make the point (not for the last time!) that most of the suspicious elements here can also have innocent explanations. Fraud analysts are often better at unmasking obfuscation than they are at remembering to consider potential legitimate scenarios in conjunction with the masking behavior. That road leads to unnecessary false positives and frustrated customers.

Consider IP masking (more on this in Chapter 6). Virtual private networks (VPNs) and anonymous proxies (mostly SOCKS or HTTP) are the most common methods, though you do see Tor used from time to time. There are even services that allow fraudsters to shuffle through fresh IPs for each new attack, and some allow the fraudster to match the IP to the address of the cardholder. That said, there are also plenty of reasons for a good customer to use VPNs or proxies: notably, privacy and avoiding content restrictions (which, while perhaps a form of abuse, is not fraudulent in the sense that we’ll be using the term in this book). Now that working from home is common, VPNs are popular with companies seeking to make remote work safer for their employees. Even Tor browsers are used by particularly privacy-conscious but entirely legitimate individuals (in fact, fraudsters rarely use Tor because they know it looks suspicious, to the extent that seeing a Tor browser in use can almost be a positive sign).

Even the simplest kind of obfuscation, such as a disposable email address, can have good explanations, though this is a matter of context. A real customer is unlikely to use a disposable email address for their bank—but they are likely to use one for a site they visit rarely, to protect themselves from spam. That’s even more true following Apple’s introduction of Hide My Email. So, depending on whether you’re working at a bank or an online reviews site, this may or may not be a relevant signal. Fraudsters also know that customers use good email addresses for their banks, so if they’re creating a new one it will likely be very plausible, perhaps simply substituting a 0 for an o or mixing up the first name/last name pattern in a different way. These tricks make an email more suspicious…except, of course, that real customers sometimes do this too.
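As an illustration of catching the substitution trick just described, a team could normalize addresses before comparing them. This is a toy sketch; the substitution table and function names are assumptions for the example, not a production matcher:

```python
# Map common digit-for-letter substitutions back to letters (illustrative table).
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def normalized_local_part(email):
    """Lowercase the part before the @, undo substitutions, drop separators."""
    local = email.split("@", 1)[0].lower()
    return local.translate(HOMOGLYPHS).replace(".", "").replace("_", "")

def looks_like(email_a, email_b):
    """True if two addresses collapse to the same normalized local part."""
    return normalized_local_part(email_a) == normalized_local_part(email_b)
```

As the chapter stresses, a match here is a signal to weigh in context, since real customers also create variations of their own names.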

Additionally, it’s vital to be sensitive to the different kinds of emails seen in different cultures; depending on the profile of the user, an email address with certain numbers in it may be a good sign or a bad sign. For example, numbers that are considered lucky are often used in Chinese email addresses, and if the rest of the purchase story also suggests a Chinese shopper, this is a positive sign of a consistent legitimate story. But the same number does not have the same meaning in European countries and is far less likely to be part of a legitimate story there.

The same point can be made about changes made to a device’s user agent (the string of characters that contains information about a device). This is a mine of valuable information for a fraud analyst, giving information about the operating system, what kind of device it is, which browser the user is employing, which languages the browser has, and so on. Sometimes there will be signs that the user is playing with their user agent. You might see an apparently different user agent profile coming from exactly the same IP several times over an hour. On the one hand, this is suspicious. On the other, these settings are easy to manipulate, and sometimes good users do this too—notably, developers, web designers, and marketing professionals trying out different settings to see how their product looks on different devices, browsers, and so on.
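A rough way to surface the pattern described above (several distinct user agents from one IP within an hour) might look like the following sketch. The event shape, window, and threshold are illustrative assumptions, and a hit should be read as "suspicious, investigate," not "fraud":

```python
from collections import defaultdict

def flag_ua_churn(events, window_seconds=3600, threshold=3):
    """events: iterable of (timestamp_seconds, ip, user_agent) tuples.
    Returns IPs that presented >= threshold distinct user agents
    within any window of window_seconds."""
    by_ip = defaultdict(list)
    for ts, ip, ua in events:
        by_ip[ip].append((ts, ua))

    flagged = set()
    for ip, seen in by_ip.items():
        seen.sort()  # order by timestamp
        for start_ts, _ in seen:
            # Count distinct user agents inside the window starting here.
            uas = {ua for t, ua in seen if start_ts <= t < start_ts + window_seconds}
            if len(uas) >= threshold:
                flagged.add(ip)
                break
    return flagged
```

Remember that developers and designers testing layouts can trip the same flag, so this belongs in a scoring system, not a block rule.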

Social Engineering

As a deception technique, social engineering perfectly masks the identity of the fraudster. When it is achieved successfully, the fraudster becomes a puppeteer (a puppet master, that is, not the headless Chrome Node.js library Puppeteer, although some fraudsters do favor that tool as well). The victim—being the puppet—waltzes through the checkout process with their own email, IP, device, and so on. It is only through strong behavioral analytics and/or rich prior knowledge of the victim’s habits that such an attack can be fully mitigated.

We discuss the impact of social engineering at length in Chapters 9 and 14. For now, we’ll settle for boosting your motivation to tackle social engineering by calling attention to the FBI’s latest Internet Crime Report, which identifies schemes bearing social engineering traits as the number one attack type in the United States in 2020 (Figure 1-1).

Figure 1-1. Visualization of total attack volume in 2020, as reported by the FBI in its IC3 report

It’s important to analyze (and have your system analyze) every aspect of a user’s online persona. The more information you have, the richer the picture you can build of that user. What a fraud analyst needs to remember, though, is that each of these details contributes to the picture—none of them alone is “good” or “bad.” A good fraud analyst can develop both the legitimate and the fraudulent stories in their mind as they examine all the details, thinking of both good and bad reasons for the details to be as they are. Only once the picture is complete can the decision be made as to whether the identity behind the transaction is legitimate or not.

The Dark Web

Since we’ve already mentioned stolen data that can be used in impersonation and we’re about to mention bots, this seems like a sensible point to talk about the dark web. Unlike the deep web, which is simply the unindexed internet (i.e., sites and pages you can’t find with a search engine) and which includes lots of outdated pages, old websites, orphaned pages and images, and so on, the dark web represents online forums, marketplaces, and sites that are actively concealed from search engines, for anonymity. Typically, access is only possible through something like a Tor browser, and many dark web sites have extra restrictions to make it more difficult to access unless you’re in the know.

A lot of the online criminal ecosystem functions through the dark web—particularly through forums, where different attacks are discussed and planned and advice and bragging mix together, and marketplaces, where the tools of the trade are bartered. Some marketplaces specialize, while others are broader. For instance, one marketplace might only sell stolen consumer data, perhaps with a particular emphasis on payment information such as credit cards and PayPal accounts. Another might have a wealth of apps designed to make fraud easier and faster, such as apps that quickly change the details of your online persona (IP, language on the computer, time zone, etc.). Yet another might focus on illegal goods of various types. Some cover all of the above, and more.

We’ll mention the dark web from time to time, generally in the context of how it enables certain fraudster attacks or techniques, and it’s certainly an important factor in understanding the online criminal world. Some companies, and some vendors, have fraud fighters dedicated to spending time on the dark web in order to get advance notice of new techniques or tools and to try to get advance warnings of an attack being planned against their own business.

That said, it’s also important to recognize that a lot of criminal chatter, planning, scamming, and even selling takes place on sites that are far more familiar to the average citizen, like social media sites and messaging apps. Telegram has become particularly popular with many fraudsters due to its higher-than-average levels of privacy, and Signal is sometimes used as well for the same reason. Refund fraud, which we talk about in Chapter 10, is a good example of how Telegram enables criminals and ordinary folks to interact for the profit of both (though to the detriment of the merchants they attack). Discord has also become a popular forum for gaming-focused fraudsters to congregate and run their schemes. Reddit is popular with fraudsters of all kinds, sharing tips, tricks, and boasts.

Fraud Rings/Linking

Fraud ring is the term used to describe multiple accounts or transactions that appear on the surface to be unrelated, but are actually part of a wider fraudulent pattern carried out by a single fraudster or a group of fraudsters working together and/or copycatting one another. The term linking is frequently used fairly synonymously with the term fraud ring, and you can assume in this book that when we use either term we’re talking about the same thing.

Finding the details that point to a pattern indicating the presence of a fraud ring is valuable for fraud teams because you can then protect your ecosystem from future attacks of the same nature carried out by the same ring, or from further actions taken by accounts that are part of the ring but haven’t yet done anything themselves that would get them blocked as fraudulent. Linking is so called because it finds the links that show similarity between entities. Once you’ve identified a strong pattern, you can see the pattern of the accounts or transactions that match it, and act accordingly.
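One simple way to implement linking is to treat shared attributes (device IDs, email addresses, card tokens) as connections and group accounts transitively, so that A–B and B–C land in one cluster even though A and C share nothing directly. The sketch below uses a union-find structure; the data shapes are assumptions for illustration:

```python
from collections import defaultdict

def link_accounts(accounts):
    """accounts: dict of account_id -> set of attribute values.
    Returns clusters of accounts connected (transitively) by any shared attribute."""
    parent = {a: a for a in accounts}

    def find(x):
        # Follow parent pointers with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Accounts sharing an attribute get unioned together.
    owners = defaultdict(list)
    for acct, attrs in accounts.items():
        for attr in attrs:
            owners[attr].append(acct)
    for accts in owners.values():
        for other in accts[1:]:
            union(accts[0], other)

    clusters = defaultdict(set)
    for acct in accounts:
        clusters[find(acct)].add(acct)
    return list(clusters.values())
```

In practice teams weight the links (a shared device is stronger evidence than a shared zip code) rather than treating every shared attribute as equal, which is one guard against the overfitting risk discussed below.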

When a fraud ring is revealed, most organizations favor short-term protection. For example, if a fraud ring originates from Ecuador, a bank may decide to route all cross-border activity from Ecuador for manual inspection for several months. An ecommerce retailer may even choose to decline all orders from a certain country or region for a while. This type of solution, besides being technologically easy to implement, relies on the common fraudster trait of “if it works, repeat it.” Fraudsters typically produce the same type of fraud over and over again until they are interrupted.

However, it’s important to note that for every fraud ring you’ve kept at bay, there’s a more sophisticated version that has evolved from its predecessor. It’s not a “solve and forget” sort of challenge. Your fraud prevention teams should also keep in mind that “surgical precision” in flagging fraud rings bears the inherent risk of overfitting (see Chapter 5 for fraud modeling best practices if you’re not familiar with the term). As with other elements of fraud fighting, what your team needs to aim for is balance: between the risk and the likelihood of false positives, and between the fear of loss from fraud and the certainty of loss of good business if broad blocks are put in place.


Volatility

It’s a truth universally acknowledged that fraud analysts shouldn’t go to work expecting every day to be like the one before it. Fraud attacks have fashions like anything else, and are sensitive to different times of the year—mirroring the behaviors of legitimate shoppers—and to the shifts and events in your own business.

In addition, fraud fighters never know where a fraudster or a fraud ring will strike next; you might have a reassuringly consistent level of fraud attacks for a month, and then out of the blue get hit by a tsunami of brute-force attacks, whether human or bot generated. And the trouble is that the fraudster trait associated with volatility is a “rinse and repeat” mentality; the second they find a weakness in your ability to handle volatility, they’ll double down and capitalize on it as long as the vulnerability is there. They may even tell their friends, gaining street cred for the tip and boosting their reputation, and opening the option of a mass attack where that can be effective.

If your system isn’t set up appropriately, it may take a long time for you to notice the problem while you’re dealing with a flood of customers during a busy period. That can lead to significant loss.

Your store may experience fluctuations every year from Valentine’s Day traffic, Mother’s Day and Father’s Day traffic, back-to-school traffic, and year-end holiday traffic—which may itself begin or end earlier or later, depending on the year. This natural volatility comes with its own challenges for your models, but unfortunately on top of that, fraudsters will try to exploit it to the fullest as well.

Fraudsters know which items are popular at different times of the year and will target those, blending in with the rush of good customers. Similarly, if you’re a business that holds flash sales, fraudsters will be as aware of that as all your legitimate users are, and they’ll use the knowledge to their advantage. They may also mimic a real pattern of last-minute orders, or first-millisecond orders (as when customers try to hit the Buy button the second desirable tickets or goods go on limited sale), a tactic that’s especially popular with fraudsters who act as ticket scalpers. Fraudsters also explore the possibilities of attacking at different times of the day; some boast of keeping track of customer behavior trends as reported in the news and by reports from companies that study these things so that they can leverage this knowledge to help them fly under the radar.

What is crucial from the fraud-fighting perspective is that your system is able to cope with all the variety that fraudsters can throw at it. Rules must be adjusted for different times of the year, machine learning systems must be able to scale quickly as necessary, and manual review teams must be prepared and ramped up for busy times of the year.

Sensitivity to fluctuations is important on both the fraud-fighting side (you want to stop fraudsters from attacking your site) and the customer experience side (you don’t want to be so averse to risk that you add unnecessary friction, delay, or false positives for good customers).

Take simple velocity—when a single user tries to make several purchases in a short period of time. That might be a fraudster, capitalizing on their success or trying different points of attack or refusing to believe they’ve been caught and blocked (depending on the situation). Or it might be a good customer, returning for more items after having discussed it with a friend or family member and deciding they need something else, or a customer who is ordering in careful batches to avoid import taxes, or a customer who was blocked once, mistakenly, and is determinedly trying again. Once again, it’s about the whole context—the identity, the story—and not the specific data points.
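A velocity signal can be as simple as a sliding-window counter per identity. Here is a minimal sketch (the window size and class shape are illustrative assumptions); as the paragraph above argues, the count is one signal to weigh in context, never an automatic decline:

```python
from collections import defaultdict, deque

class VelocityTracker:
    """Counts events per key within a sliding time window."""

    def __init__(self, window_seconds=600):
        self.window = window_seconds
        self.events = defaultdict(deque)

    def record(self, key, ts):
        """Record an event for key at timestamp ts (seconds).
        Returns how many events this key has inside the window."""
        q = self.events[key]
        q.append(ts)
        # Evict events that have aged out of the window.
        while q and q[0] <= ts - self.window:
            q.popleft()
        return len(q)
```

The key might be a card, an account, or a linked identity cluster; keying on the identity rather than a single data point fits the "who, not what" principle.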

Relatedly, bot attacks, which occur when a fraudster uses an automated program to attack your site repeatedly, trying different data each time, can dramatically increase the number of transactions (and usually the number of attacks) your system has to handle in a short period of time. The same can be true of brute-force attacks, in which a human is likely behind a similar ramming effect. You may be a business that sees a nonfraudulent use case for bots, if not for brute-force attacks (it’s hard to think of a good reason to make multiple attempts to break into different accounts with stolen information). If you’re in an industry where resellers are part of the ecosystem and you know they sometimes put great effort into getting the latest items, then bots are not out of the question when a hot new item is about to hit. How you react to that depends on your company’s policy toward resellers, but from a fraud analytics perspective, what matters is that you know what’s going on, and when it’s the same people trying again.

Your team will need separate training sets for different models, whether you’re using rules or machine learning or both. For instance, you’ll need a training set that’s sensitive to average volatility so that you can catch a fraud ring when it appears out of the blue one fine, ordinary day. But you’ll also need a model that’s more volatility agnostic, which would be a fit for Black Friday–Cyber Monday. This would need to be more forgiving of volatility, and your team would need to be more present to analyze and guide as necessary. In the same way, you would want to work with the chargeback team to develop a model that works for January and perhaps February, when the chargebacks come in after the holidays.

Fraud prevention is not a one-size-fits-all type of business. You—and your models—need to be able to adapt to different times of the year and different situations. Much of this can be prepared for, but it’s also important to carry out continual assessments during volatile times in order to make changes as necessary on the fly. That’s true even if your manual review team is overwhelmed by a flood of orders. Making time for bigger-picture analysis, even during busy periods, will take some of the weight off the manual reviewers and make sure your system is far more robust against the threats it faces.

Card and Account Testing

Not all uses of a stolen card or hacked account are for fraudsters’ immediate profit. It’s common for fraudsters to test whether they’ll be able to leverage a card or account by making a small purchase or taking a small step in the account, like adding a new address. If they’re blocked right away (perhaps the card has been reported as stolen already), they’ll give up and move on to the next one, having only spent a minute or two on the burnt one. If it’s smooth sailing, they’ll be willing to invest effort in setting up a richer profile to leverage the card or account for larger amounts.

The nature of card and account testing means fraudsters generally gravitate toward sites or apps that will have fewer defenses against their low-value attack, either because they’re in an industry that has not traditionally invested in fraud prevention (such as nonprofit organizations) or because their goods are low value but must be delivered under time pressure (such as food delivery or low-value gift cards and other digital goods), which means the fraud team may not be able to invest much effort into small-ticket purchases. However, other kinds of sites also experience testing, sometimes as part of a fraudster’s investigation to map out the typical purchase process and sometimes to build up a little bit of a legitimate-seeming profile on the site before attempting larger-scale fraud.


Keeping track of card and account testing is valuable for a fraud prevention team because, if identified, it can help to profile the fraudster or fraud ring behind the attempts, making them easier to identify in the future. Remember, it’s not what, it’s who. You may also be able to identify patterns between the timing or type of testing and larger monetization efforts.
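As a toy illustration of tracking testing patterns, one could look for devices making small charges across many distinct cards, a classic card-testing footprint. The thresholds and data shapes here are invented for the example and would need tuning against real traffic:

```python
from collections import defaultdict

def flag_card_testing(transactions, amount_threshold=5.00, card_threshold=3):
    """transactions: iterable of (device_id, card_token, amount) tuples.
    Flags devices that attempted small charges on many distinct cards."""
    small_charge_cards = defaultdict(set)
    for device, card, amount in transactions:
        if amount <= amount_threshold:
            small_charge_cards[device].add(card)
    return {
        device
        for device, cards in small_charge_cards.items()
        if len(cards) >= card_threshold
    }
```

A flagged device profile can then feed the linking techniques discussed earlier, helping identify the same actor when they return to monetize.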

Abuse Versus Fraud

The distinction between abuse and fraud is a tricky one, and there isn’t widespread consensus among fraud prevention teams and the companies they protect about where to draw the line.

In general, fraud is a more professional affair, carried out mainly by actors who specialize in this form of crime. It’s likely to include the impersonation or deception techniques we’ve discussed, and fraudulent attempts will often reflect knowledge—sometimes quite deep knowledge—of the site’s products, processes, and vulnerabilities.

Where the gain involved is directly financial, abuse tends to be more the province of normal customers who want to get a bit more than they’re entitled to by cheating. Sometimes this cheating goes so far as to become really fraudulent, taking it into the realm of friendly fraud, discussed in Chapters 2 and 3, and more deeply in Chapter 10. Consider programs set up to attract new customers, either directly or through referrals; sometimes credit with the store is offered as part of the incentive. Really motivated abusers can set up multiple accounts, cashing in on the offer for what is effectively cash. When they leverage this credit to buy items for free (at least as far as they’re concerned), the business loses out, sometimes substantially.

In this and similar cases, whether or not these activities count as fraud and fall into your lap as a problem is up to how your company views them, and in many cases where they fall on the spectrum between a focus on stopping fraud and a focus on customer satisfaction. If the company prefers to optimize for customer experience, it may take many repeated instances of severe friendly fraud before it’s willing to block a real customer. If it is risk averse, you may be charged with preventing friendly fraud—something that is almost impossible on the user’s first try, unless you’re collaborating directly with other merchants, preferably in your space, who may have seen their tricks before. Repeated offenses can stack up quickly, though, and it’s important to be clear about your company’s policy regarding when and how to deal with these abusers.

There’s another kind of abuse that isn’t quite so severe and which, unfortunately, may sometimes be carried out by your most enthusiastic customers. It’s usually referred to as promo abuse (abuse of promotions). If your site is offering coupons, a discount on the first purchase, or a similar offer, customers who like your site or the products you offer may consider setting up a new account (or accounts) to take advantage of the offer multiple times. Or if you have a generous returns policy, customers may use an item and then return it. These activities aren’t exactly fraud; they’re…cheating. The industry generally refers to it as abuse.

Whether or not your fraud prevention team is responsible for catching this sort of activity is often a reflection of how fraudulent your company considers these behaviors to be. Marketing or sales may request your help in protecting their coupon offer, so it’s good to be prepared for the need, but it’s unlikely to be a standard part of your responsibilities unless it becomes a real drain on the business. In that case, judicious friction can sometimes be enough to deter these abusers. When they go further and start using the sorts of tactics we might expect from a more professional fraudster—IP obfuscation; burner wallets, emails, and phone numbers; and so on—you can fall back on the defensive tactics you usually take against career fraudsters. Being allowed to use your skills in these cases depends on whether you can persuade your company that it’s warranted. You need strong relationships with growth-driven departments to be able to give them the context they need to understand the loss, appreciate that it doesn’t serve their real goals, and have your back when you try to stop it.

There are other forms of abuse, however, that do not lead to a direct financial benefit to the abuser and can be part of wider fraud attacks. Even if your team doesn’t have formal responsibility for stopping them, you’ll probably want to track this sort of behavior, because it can often help you prevent fraud further down the line. It can also help you protect the business in wider terms, which is something you should make sure upper management is aware of and appreciates.

Account creation is a good example of this kind of abuse. A fake account might be part of straightforward coupon abuse—but it might well be the first step in aging an account that will, after some period of time, be used for fraudulent purposes.

Catching this sort of abuse is similar to identifying fraud: you’ll want to look for patterns linking the identities that are setting up the new accounts, posting the reviews, and so forth, to show that the same person is ultimately behind them all. If you can build a useful profile of this actor, you can use it to ensure that they don’t succeed in more direct fraud later on. Beyond that, identifying these accounts is valuable because it means your business leaders won’t have a mistaken view of the business’s users or number of accounts, which could lead them to make problematic decisions based on bad data.
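The cross-account linking described above can be sketched as a simple union-find pass over account records: any two accounts sharing an identity attribute are merged into one cluster, on the theory that shared attributes suggest a single actor. This is a minimal illustration rather than a production approach, and the field names (`device_id`, `ip`, `card_hash`) and sample accounts are hypothetical:

```python
from collections import defaultdict

def link_accounts(accounts):
    """Cluster accounts that share any identity attribute, using
    union-find so that links are transitive (A~B and B~C => A~C)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    seen = {}  # (field, value) -> first account id that showed it
    for acct in accounts:
        for field in ("device_id", "ip", "card_hash"):
            value = acct.get(field)
            if value is None:
                continue
            key = (field, value)
            if key in seen:
                union(acct["id"], seen[key])
            else:
                seen[key] = acct["id"]

    clusters = defaultdict(set)
    for acct in accounts:
        clusters[find(acct["id"])].add(acct["id"])
    # Only multi-account clusters are interesting for abuse review.
    return [ids for ids in clusters.values() if len(ids) > 1]

accounts = [
    {"id": "a1", "device_id": "d1", "ip": "1.2.3.4"},
    {"id": "a2", "device_id": "d1", "ip": "5.6.7.8"},
    {"id": "a3", "device_id": "d9", "ip": "5.6.7.8"},
    {"id": "a4", "device_id": "d7", "ip": "9.9.9.9"},
]
print(link_accounts(accounts))  # a1, a2, a3 cluster together; a4 is not flagged
```

In practice the shared attributes would come from device fingerprinting, payment instruments, email normalization, and similar signals, and the clusters would feed a review queue rather than an automatic block.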

Content abuse, or review abuse, fits a similar pattern. Fake reviews can be from real customers trying to boost their profile or a friend’s product or service, or they can be part of a wider fraud scheme. Fake reviews can make certain businesses, which are perhaps merely fronts, look legitimate. They can also help a fake account look more substantial. They pollute your ecosystem, undermining customers’ trust in your site and its products or services.

Money Laundering and Compliance Violations

Money laundering is a natural concern for banks, fintechs, and other financial institutions, and in those contexts there are generally teams dedicated to stopping it—not to mention, of course, regulations and tools dedicated to its prevention.

Anti–money laundering (AML) work has occupied banks and financial institutions for many years, and as fintechs and cryptocurrencies have joined the financial ecosystem, battling money laundering has become an important part of fraud prevention in those organizations as well. In fact, the booming cryptocurrency market has boosted money laundering efforts, and prevention work, everywhere, especially now that a number of banks allow customers to convert cryptocurrency into other forms of currency. Setting up new accounts for this purpose has become fairly commonplace, which has added urgency to the need to prevent money laundering in cases where the money is not legitimate. Since it is so difficult to tell where cryptocurrency is coming from, the emphasis must be placed on the person setting up the account. Is this a real identity? Is the person setting up the account the one to whom that identity belongs?

Fortunately, banks have considerable experience authenticating the identities of customers who want to set up new accounts, and most have streamlined processes in place to verify identity documents and ensure they are authentic and belong to the individual trying to set up the account. In-depth and often AI-assisted document validation has been joined in recent years by liveness checks and selfie authentication to ensure that the person involved really is present and not a photo or a deepfake (see Chapter 11 for a discussion of deepfakes in this context), even if the onboarding process is being done entirely online. There are, of course, also manual processes that help ensure that no tampering has been attempted with either the document or the photo, supporting the effort to prevent successful impersonation.

More difficult to identify are schemes that involve real customers, with real identities and real accounts, who allow those accounts to be used by money laundering agents. In some countries, it is illegal to prevent a citizen from opening an account if their identification is in order and legitimate, even if your team suspects the motivation of the individual concerned. In these cases, all a fraud team can do is track these accounts and their activities particularly carefully once they are set up. This restriction puts considerable pressure on the fraud team and gives the criminals an added advantage.

It’s worth noting in this context that the criminals involved with this sort of financial muling scheme are typically involved in large-scale organized crime, and are not small-time actors. On the one hand, this means they are well funded, organized, and difficult to catch, but on the other hand, it often means that finding one vulnerability will open up a whole scheme to a fraud team. Collaborating with other financial institutions can also bear fruit quite effectively, since organized crime usually attacks multiple targets.

Money laundering, and AML work, is commonly understood within the context of banks and financial institutions, and we discuss this in greater depth in Part V of the book. What is less often recognized is that money laundering is also possible in online marketplaces—and in some ways is much easier to do there because the defenses against it are less robust and marketplaces do not have the same compliance responsibilities as financial institutions. The principle is simple. In a marketplace, buyers and sellers interact. (We use the terms buyer and seller to refer equally to the actors involved in ride-sharing, apartment rentals, car rentals, or anything for which there may be an online marketplace.) A buyer sends money to a seller. The seller receives the money as apparently legitimate revenue, and it has thereby been laundered.

For example, let’s say a fraudster wants to clean their ill-gotten gains and is using an online marketplace to do it. If one fraudster (or one fraud ring) acts as both buyer and seller, they can place orders for perfectly legitimate products or services, and pay for them. The payment, of course, goes back into their own pocket—as the seller. In their seller capacity, they have acquired the money completely legitimately…except for the small fact, of course, that the product or service was never sent or received. In general, the product or service is nonexistent, with the seller account existing purely for the purpose of laundering money.
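One common signal for this buyer-as-seller pattern is identity overlap between the two sides of an order. The sketch below flags transactions whose buyer and seller accounts share identity signals such as a device fingerprint or payout account; the signal format and all account and order names here are hypothetical, and real systems would weigh many more signals than a simple set intersection:

```python
def flag_self_dealing(transactions, account_signals):
    """Flag orders whose buyer and seller accounts share identity
    signals, a pattern consistent with one actor playing both roles."""
    flagged = []
    for tx in transactions:
        shared = (account_signals.get(tx["buyer"], set())
                  & account_signals.get(tx["seller"], set()))
        if shared:
            flagged.append((tx["order_id"], shared))
    return flagged

# Hypothetical identity signals collected per account.
account_signals = {
    "buyer_77":  {"device:d42", "ip:5.6.7.8"},
    "seller_12": {"device:d42", "payout:acct_991"},
    "buyer_80":  {"device:d99", "ip:8.8.4.4"},
}
transactions = [
    {"order_id": "o1", "buyer": "buyer_77", "seller": "seller_12"},
    {"order_id": "o2", "buyer": "buyer_80", "seller": "seller_12"},
]
print(flag_self_dealing(transactions, account_signals))
# o1 is flagged: buyer and seller share the same device fingerprint
```

Orders flagged this way would typically be combined with other evidence, such as the seller never shipping anything, before any action is taken.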

In the same way, perhaps a criminal wants an easy, safe way to collect payment for illegal items online. They can set up an account as a seller on an online marketplace and use it to receive payment. Different items in the store can correspond to different illegal items they sell. For example, if they were selling drugs but pretending to sell apparel, heroin might correspond to a designer handbag while cocaine might be a cashmere sweater, and so on.

To be sure, there’s a small fee to the marketplace, but it’s really a very small amount when you consider the benefit the fraudster is reaping here: freshly laundered money, an easy and unsuspicious way for customers to pay for illegal items, and all so simple. They don’t even need to impersonate anyone or use deception techniques.

We bring this up to show the relevance of money laundering to the wider online ecosystem as a concern, since fraud analysts are not always aware of this factor. More than that, though, we want to make a point that is fundamental to successful fraud prevention and one we will try to make more than once during the book.

We—the fraud fighters, the good guys, the ones on the side of the light—tend to think of the different aspects of both fraud and fraud prevention in terms of helpful categories that give us clarity and a framework. We distinguish account takeover (ATO) from stolen credit card fraud, and both from AML. We distinguish between trends in ecommerce, in marketplaces, and in banks. We consider different kinds of fraudsters. This book does this too—one look at the contents page will show you this. It’s a useful way to understand the scope and levels of the challenges facing us, appreciate different aspects of different types of fraud and fraudsters, and see how they fit together. But fraudsters don’t think that way. They are out to defraud your company, and any other company out there. They don’t care about our categories, and if we’re too wedded to those categories, they’ll exploit them as vulnerabilities.

Fraudsters move smoothly between stolen cards, ATO, social engineering, money mules, shipping manipulation, and even money laundering, pulling out whichever trick might work in the circumstances in which they’ve found themselves. AML professionals and fraud fighters would both benefit from an awareness of each other’s categories and concerns. (This is, in fact, the primary reason we combined both aspects in this book.)


This chapter sketched out definitions and distinctions relating to fraudster traits and types of attacks, which are some of the key building blocks of understanding fraudsters for the purposes of fraud prevention. We also emphasized the importance of the identity behind a transaction, something a fraud analyst should bear in mind even when focusing on the details such as IP or email analysis. The next chapter explores the other half of the picture: the different types of fraudsters, their skill sets, and their motivations.

1 Cole Porter, “You Do Something to Me,” in Fifty Million Frenchmen, music and lyrics by Cole Porter, book by Herbert Fields (1929).

2 FBI Internet Crime Complaint Center, 2020 Internet Crime Report, accessed March 4, 2022.
