Chapter 1. Was It Good for You?

Linda Wilkinson

Do you hear that group of people snickering in the corner? They just found out that the third-party consulting firm you hired tested their code in production and sent 14,000 form letters out to your customers with a return address of “Bertha Big Butt.” While the CEO and executive management team are sweating bullets and preparing mitigation strategies, your testing team is trying (without success) to stifle their guffaws.

Testers think differently than the rest of the IT team.

It’s not that they don’t appreciate the seriousness of the situation. They do.

It’s just that it’s…well…it’s FUNNY.

If you’re going to manage or work with testers, it stands to reason that you have to understand them. They march to the beat of a different drummer than the rest of the IT staff.

Have you, or has anyone you know, ever associated with someone who works in a hospital emergency or operating room? You’ll see a similar phenomenon there. In order to cope with what is often difficult, stressful, and depressing work, the medical staff have a tendency to develop what would appear to be a somewhat macabre and bizarre sense of humor. And you want that kind of behavior. It’s infinitely better for your future health and well-being that your surgeon not be weeping copiously, hands shaking, into your internal organs….

Testers are trained to find and report problems. They view their contribution as helping the company, the development organization, and the customer or end user by exposing risks. They do that by finding and reporting software anomalies, often contributing information about the consequences of those errors.

Are testers policemen? Not usually. They can’t “arrest” anyone for breaking a law, even if the world of software development could figure out what those laws should be, and most do not have the authority to keep something from moving to production, regardless of the generic goodness (or badness) of the software.

Their role is more that of an advisor. In fact, it is difficult and somewhat unfair for an organization to place testers in “gatekeeper” positions. A “gatekeeper” is someone who has to bless the software as Good before it can be moved to production. Most testers have difficulty balancing risk against need, marketing requirements, and cost. When you think about it, assessing and accepting risk is really a project and/or executive management task.

Testers know that no matter how many errors they’ve found and have been fixed, there are more lurking somewhere in the code. They are often reluctant to “bless” anything as Good. This means your project might be held up for a very long time while your “gatekeepers” make damn sure everything is as error-free as possible—which is often far beyond the point where it is economically intelligent to continue to find and fix errors.

What’s more, you can actually end up training your testers away from finding and reporting errors. Instead, they spend their time attempting to assess how important each error is, when balanced against all of the considerations that feed into go/no go decisions. They might even lose their unique perspective and sense of mission—not bothering to write up what they discover, judging it to be “unimportant” in the scheme of things as they understand it. The problem is that their perspective is inevitably limited. Testers are not end users. They are not marketing experts. They are not project managers, vendors, accountants, or executive managers. They have valuable information to give you in regard to your project risks, and they should be used and respected in the advisory capacity that allows them to do what they do best: test your products and drive out error. Testers are excellent contributors to any team that might have go/no-go decision-making authority, but there are problems inherent in expecting them to function as the entire team.

Because of their mission, the types of software errors, issues, and problems that keep project managers awake at night, sweating and shaking uncontrollably, are the very things that make the life of a tester interesting.

Software testers know they’re the “dark side of the force.” They often joke about it (“Come to the Dark Side—We Have Cookies”). They view themselves as rebels, as the Bad Guys in the Black Hats, as Indiana Jones, Captain Jack Sparrow, and Sherlock Holmes all rolled into one. You never knew testing groups view themselves as the original Bad Asses, did you? Well, they’re about to kick down your pretty house of cards. And they’re going to enjoy it. Good testers almost always have an “attitude” of sorts. It can make them kind of irritating at times. After all, they never would have tested in production. They would have tried scenario X before shipping. They told you something was wrong and you didn’t listen, did you? Sometimes it’s enough to make you want to hit them with a stick. Especially when they’re right….

And they like finding bugs. Many of the really ugly bugs are especially funny to a tester. Smart testers find out early that nontesters aren’t going to understand them or their humor. They also (sadly) come to realize their role might not be especially appreciated, understood, or rewarded. So they learn not to share their unique perspective of the software world with the rest of the organization.

There is nothing in the IT world that equates to working as a tester. How many jobs in the business world pay you to tell the truth? Testers are not paid to tell you everything is peachy when it is clear that the only “peach” involved is somewhat rotten. You can expect that kind of misplaced optimism or doubletalk from other members of your project or IT teams. Testers, however, are paid to tell you the truth as they know it. Sometimes that means telling you that your baby is ugly and why.

It’s helpful when you manage or work with testers to understand how they think, which means you need to understand what motivates and excites them about their work.

So what is a tester, exactly? If you were to pick just a few key qualities, one of the first would be that a tester is curious. They want to know how things work. They are experimental. They want to see what happens when they try different scenarios or experiments against what has been presented to them. A good tester is also relatively fearless. They aren’t afraid they’ll break something. They aren’t afraid to tell you the truth about what they’ve found, regardless of your position. And they aren’t afraid to stand their ground and fight to get it fixed if they believe it negatively impacts the potential success of the product.

A tester is intelligent, analytical, and a fast learner. They are, in fact, always learning. Their jobs require it. Technology changes on a constant basis, and every project they receive is different in some way from the last. Sometimes they have great specifications. Sometimes not. Sometimes they have no written documentation at all. They need the ability to ask the right questions, investigate the right issues, put together the pieces of the puzzle, and draw the right conclusions.

Testers are also generally apolitical. If you find a tester who is particularly good at politics, chances are pretty good they aren’t especially great at their job. It is very difficult to play political games successfully when your job involves discovering and reporting issues. Testers are often accused of being blunt, rude, not team players, and the like. That’s rarely true. Chances are good that anyone making such accusations does not understand or appreciate the role of the tester on a project team. Their jobs do not allow them to sweep any information that is “inconvenient” under the carpet.

Those are the good qualities of testers. There are other qualities that are less desirable, but still part and parcel of the overall persona of most testers, particularly those with a lot of experience. A tester tends to be distrustful. This is a learned behavior. They’ve been told over and over again that X doesn’t need to be tested or Y code “hasn’t been touched.” That information has been wrong more times than they can count. So you can tell a tester the grass is green and they’re still going to go check for themselves. A tester is critical, and it bleeds into other areas of their lives. They’ve been trained to find and report problems. That means if you send them an email with a misspelling, the entire team is going to helpfully point it out, along with any other mistakes you (or anyone else) make. Testers question everything, and that includes authority. It’s generally a bad idea to try to lie to or finesse a test team with whatever politically correct propaganda would be successful with some other group of people. You’ll get far better results telling them the bitter truth. It’s the only way to earn their respect and trust.

You may know testing staff who really don’t have any of the qualities previously mentioned. Not everyone who works in a testing organization is a tester. Not everyone with the title of tester is a tester. Some are comfortable, happy, and adept at running existing tests. They aren’t gifted at analysis, curious, or experimental. They may not be particularly fearless, getting easily intimidated by stronger personalities, people in positions of authority, or the thought of having to tackle something new. They may not report bugs, as they are afraid of the repercussions; their primary concern is to not rock the boat. Some may be so “into” politics and their own personal agendas and success that they lose the very qualities that set them apart and made them valuable to the test team. Overall, depending on the size of your team, all types of personnel can contribute and help project efforts be successful, but it pays to recognize and nurture the “real” testing talent on your team.

An executor of someone else’s test ideas may or may not be a tester. A tester, when given a bank of existing tests to run, is probably going to be pretty bored. It’s likely they’ll run them as quickly as possible, just to get them off their plate. This means they may not pay close attention to those tests, missing things a dedicated and thorough executor would find as a matter of course. On the plus side, however, a “real” tester is going to take ownership of those tests. They’ll think about the ideas in those tests, ask questions, add to them, change them, and explore some things the original analyst never considered. If the original analyst was talented, it’s likely they won’t find much to update or add, which will add to the boredom factor for them. You’ll find that, over time, any truly creative, engaged, intelligent tester gets their spirits, initiative, and creativity crushed when the bulk of their jobs consists of anything that is merely rote, and that certainly includes executing large banks of existing manual test cases. It is inevitably best for the morale of your testers to farm that stuff out to people who find comfort in routine, automate it, or ship it offshore; anything that gets it off their plates. They want to be working on something new. They want to be finding and reporting bugs. They want to be adding value that no one else can add.

It’s the tedium involved that makes many testers vocally denigrate running existing regression test banks. You’ll find that most understand the necessity and even agree with it, but it’s like doing a puzzle someone else has already solved. It takes away the joy of exploration and the pleasure of the hunt. Most testers are aware that regression tests find only a fraction of the error resident in an area of code; they’d really much rather be finding the bulk of the errors, which are lurking in the new stuff. It’s all about the hunt and the joy of discovery.

So what about that attitude thing? Isn’t it all about working together as a team? Yes. And testers want to be on your team. They want to help; they want to contribute. They badly want to be appreciated. Their focus, however, makes it hard for other project team members to accept and appreciate their contributions. Even their humor can make it difficult to get them integrated into a team. What’s worse, if you work for the type of organization that is not focused on quality and does not recognize or fix anything your testers have worked so hard to find, a test team is going to view that as a lack of respect for them or their work. And if you don’t give your testers the respect they deserve, you’ll demoralize them pretty quickly, and you will be unable to retain anyone with a skill set that is marketable in your region.

The testing field as a whole is very civilized and evolved now, and testers have become better at “playing with others.” Your most experienced testing staff will sympathetically pat you on the back and tell you that everyone knows there’s more to it than just finding errors. They will nod understandingly and totally support your decisions not to fix errors A, B, and C. No one will fling himself to the floor and throw a tantrum. In fact, testers with some years of experience will tell you whatever you’d like to hear, having learned through experience at your particular company that doing so gets them (and thus your organization) the highest-quality results. But what needs to be remembered is that it’s likely they’re willing to sacrifice errors A, B, and C in order to get you to fix D and E, which are more serious. Most testing staff secretly want you to fix everything they find. Testers have a definite and distinct bias toward getting stuff fixed. They care that things are wrong, and they want them to be better. When you think about it, do you really want your testing staff to be any other way?

Overall, an experienced test team can present bugs in such attractive wrapping paper (words) and ribbons (understanding of your problems and issues) that it will take you some time to realize the present you’ve just been given is really just an exceptionally large bag of poop. They’re actually just re-gifting; it’s the large bag of poop you gave them to start with, but somehow you didn’t realize it smelled quite that bad when you turned it over to them. And what they say to you in polite and politically correct language and what they talk about back at the ranch with the other cowboys—well out of earshot—are two different things. They’ve learned through painful experience that the “dudes” they work with in other areas aren’t really going to appreciate the humor or enjoyment involved in systematically finding every cow patty in the field and presenting them to the project team in a nice colorful box with a big red bow….

Finding software glitches—bugs—is much like a treasure hunt. Bugs are often hidden, and it takes a combination of logic, technique, and intuition (or luck) to find them. It’s no coincidence that many testers are inordinately fond of puzzles. Testers like to hunt for and find stuff. The hunt is exciting, and finding an error (or an answer) is the ultimate motivation. When a tester finds a bug, they’re earning their pay. From their perspective, that’s one more problem that won’t be found by the end user, one more opportunity for development to make the product better, and one less element of risk for their company. Finding a bug is a Eureka Moment. What a development or management resource regards with unmitigated dislike, disgust, or dismay is actually a Thing of Beauty to a tester. It’s buried treasure. It’s a gold doubloon.

Different testers prepare for a bug hunt in different ways. Their preparations will depend on your environment and your development methodologies. Some of it will come down to personal preference. They may write test cases in advance. They may work from a list of notes. But some activities are common regardless of methodology.

They’re going to read everything available on what they need to test. And they’re going to ask questions—many, many questions. They’ll ask questions until they’re satisfied they understand the application as well as possible, and then they’ll decide how best to do their testing and devise a plan. The plan might be formal or it might just be in their heads, but most testers know what they want to examine before they begin testing, and they have some idea as to how the system should look and behave as they begin to experiment.

This is where technique, training, and experience kick in. A trained, experienced tester tends to find more error than their untrained, inexperienced counterparts. This has nothing to do with intelligence and everything to do with mentoring and learning. Nor does it mean neophytes never find anything of value. They do. But an experienced tester knows where to look. They know what is likely to break, and they’ve learned what types of techniques have been successful in helping them find bugs under similar circumstances. It doesn’t really matter whether a tester has been “classically” trained (boundary analysis, etc.) or trained in agile technique (heuristics, tours, etc.) or both. Once a tester has learned to read between the lines, look beyond the obvious, ask the right questions, and expand their horizons, you have a real testing powerhouse on your hands. And it’s a testing powerhouse that will continue to learn and add new “tools” to their testing toolbox throughout their career.
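
To make one of those classical techniques concrete, here is a minimal sketch of boundary-value analysis. Everything in it is invented for illustration (the hypothetical validate_quantity() function and its 1-to-100 rule do not come from this chapter); the point is simply that a trained tester probes the edges of a valid range, where off-by-one mistakes like to hide, rather than only the comfortable values in the middle.

```python
# A minimal sketch of classical boundary-value analysis. The hypothetical
# validate_quantity() and its 1-to-100 rule are invented for illustration.

def validate_quantity(qty: int) -> bool:
    """Hypothetical business rule: an order quantity must be between 1 and 100."""
    return 1 <= qty <= 100

# Boundary analysis probes the edges of the valid range, where off-by-one
# mistakes tend to hide, rather than only "typical" values like 50.
boundary_cases = {
    0: False,    # just below the lower bound
    1: True,     # the lower bound itself
    2: True,     # just above the lower bound
    99: True,    # just below the upper bound
    100: True,   # the upper bound itself
    101: False,  # just above the upper bound
}

if __name__ == "__main__":
    for qty, expected in boundary_cases.items():
        actual = validate_quantity(qty)
        status = "ok" if actual == expected else "BUG"
        print(f"qty={qty:>3}  expected={expected!s:<5}  actual={actual!s:<5}  {status}")
```

If the rule had accidentally been coded as 1 <= qty < 100, the 100 case would flag it immediately; that is exactly the kind of edge a trained tester checks by reflex.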

Smart project teams take advantage of all that knowledge and intuition. The reason experienced project managers customarily get testers involved early in the project is not because they’re lonely and want some company in their meetings. No, they want those testing gurus asking their questions early in the process, when it’s faster, easier, and cheaper to fix discrepancies. They want the development staff to pay attention to what the testers are going to be looking at so they can develop better code. Testers used in this capacity often help the team find design flaws well before they ever get a chance to manifest themselves as bugs further down the line.

There have been arguments about the roles of a tester for literally decades now. Some feel their role is to “assure quality,” which would be fine if anyone could decide what “quality” actually means. Some feel it is to help development staff build better code by training them to look for bugs and to start building code that doesn’t contain them. Some testing experts focus on why and how bugs are found: the strategy, technique, and nomenclature involved in finding bugs in various environments. All of that is interesting, and all of it benefits the field in some way.

But, in essence, the purpose of testing is to find bugs.

Testers “assure quality” by presenting bugs/issues/discrepancies to project teams and management to help them make better decisions. They help developers become better at their jobs by showing them the types of errors found in their code so they can fix those errors, learn from their mistakes, and stop making the same mistakes in future work. Testers learn new strategies and techniques to help them find more (or more important) bugs. They categorize what they do into new strategies, such as tours, to help train others to find bugs. And if no (or few) errors are found during the testing period, well, that is important information as well.

Any tester will tell you, however, that there are bugs and then there are BUGS. Generally speaking, it’s not the number of bugs that are found that makes things interesting. For example, a tester can find thousands of cosmetic errors in a large web application. What is a cosmetic error? A misspelling. A message to the user that is grammatically incorrect. The wrong color on an icon. Something put in the wrong place on the screen.

Testers don’t like these kinds of errors any more than anyone else does, especially when they find 10 quadzillion (QC technical term for “a lot”) of them. It takes longer to write up the defect report for one of these errors than it took to find it, and they are inevitably low-priority errors. On the positive side, usually they are also easy to fix and they do get fixed quickly.

You might wonder why anyone would bother with cosmetic errors anyway, but someone who has worked in the IT field for a while would tell you that the end users of a given application might care deeply about issues that you find trivial. Part of it might be something called the “irritation factor.” Sure, that misspelling in the field title or an informational message might not bother anyone much at the moment, and everyone on the project team will agree the level of severity is roughly the same as that of dirt. But to the end user staring at it two thousand times a day, the “irritation factor” is very high. Often a project team has difficulty understanding how minor issues, functionally speaking, can be major issues to an end user. Consider navigation problems—simple tabbing on a screen. If negotiating through a given job function now takes 25% longer than it used to or three extra keystrokes are required, you are potentially impacting the bottom line of your end users. Their jobs, bonuses, or the output of their workgroup might be part of their evaluation process. If your changes lower their output, they would rightfully consider such issues urgent.

So testers report everything they find. Those with experience report severity from their perspective, but generally do not attempt to dictate business priority. Often their understanding of business priority, like the development team’s understanding of business priority, is somewhat incomplete and not based on personal experience with the job function. On occasion, disciplining oneself to not “speak for the users” can involve swallowing one’s own tongue. It is very common for business users to be willing to “live with” code that contains grievous errors, but insist that something that appears inconsequential or trivial get fixed or added at the last minute. What can you say? It’s their dime. The only advice you can give a tester under such circumstances is Let It Go.

If end users are willing to work around serious issues, that’s their decision. It generally does not go over well to dictate to people what they do or do not want. The job of the tester is to seek, find, and report, not to pass judgment in some sort of godlike capacity. Testers should feel free to offer their professional opinions; in fact, everyone on the team should feel free to offer professional opinions. Ultimately, however, the people that need to weigh in on impact to the business users are, well, the business users themselves. A difference of opinion in regard to production-readiness needs to be escalated and passed up the chain to executive management. Part of management’s job is to assess risk and make hard decisions for the company. That said, the bias of the tester should be (and usually is) toward getting errors fixed.

One of the saddest situations in the field is one where the tester does not report all of the errors he finds. The reasons can be myriad, but the most common is the feeling on the part of the tester that there is no point in reporting certain types or categories of errors because they’ll never be fixed anyway. This is “learned” behavior, and you’ll normally find testers with this type of attitude to be disillusioned, jaded, cynical, and uninterested in their work. Their interest and desire to report bugs has been beaten out of them over time because of their working environment. Another reason may be that they’ve been convinced that, politically and practically, it’s not “smart” to report everything they find. They should report only what the company cares about. Well, if the company isn’t getting a complete picture, how do they know whether they care or not?

Everyone is aware that many errors cannot—or from a financial perspective, should not—be fixed prior to production. Part of the “art and craft” of successful project management is making the right decisions as to what to defer and what to fix. For example, say the project team decides to fix 14 errors and defer 32. But the tester opted not to report 324, because development “never fixes” field errors. This means the project manager and upper management staff are making decisions based on faulty, incomplete information. In this case, the UI is probably not yet ready for Prime Time.

In addition, reporting every error, even in a company with a history of not addressing certain errors, can eventually turn around corporate policies (or “that’s the way we’ve always done things”). If a tester reports 40 errors, none of which get addressed, the application goes to production, and the users report those same errors with urgent priorities and demands that they get fixed as soon as possible, then development and project managers will start to pay more attention to those types of bugs in the future.

Overall, however, reporting cosmetic errors is time-consuming and isn’t overly exciting for most testers. They do it because they are obligated to do so in order to provide a complete and accurate picture of the state of the application and because those errors might matter a lot to an end user.

So what kinds of bugs do make the life of a tester worth living?

The nasty ones. The complicated, multifaceted errors that seriously impact the ability of the end users to do their work. Those that are subtle and have serious impact on some process down the line. And those that cause an application to tank. (“Tank” is another “scientific” QC testing term for going belly-up like a dead fish.)

To understand the nature of an “ugly” error, some understanding of the development process is necessary.

A standard GUI, UI, or screen error is typically the result of an oversight. Either someone left something out because it wasn’t specified, they misunderstood what the user wanted, they misinterpreted a requirement, or they simply misspelled a word. These errors are usually easy to find, easy to recreate, and easy to fix.

Beyond that, however, things get more complex and therefore more interesting. A developer is often working in somewhat of a vacuum on her own piece of code. That piece may be invoked by multiple other pieces of code and feed data into still other modules/applications or some sort of database. Consider that at the time developers write their code, all of the other pieces of code they need to interact with are also under development. This means that developers, in order to test their code, are likely to “dummy up” (simulate) the input they’re supposed to receive and feed the data to yet another set of dummy entities, examining the end results as best they can at the time.

The problem is that the actual data received might be different from what was anticipated. Consider how many changes are made during the life of a project. It may turn out the data output by any one piece of code is no longer in a format that can be used by the next entity down the line. So even an excellent developer who does good unit testing is likely to run into issues at those points where his code intersects with someone else’s code.
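
As a purely hypothetical sketch of that situation (the format_invoice() function, the field names, and both records are invented here, not taken from any real project), consider a developer who unit-tests against the input she dummied up, only to have the real upstream module deliver the same fields in a slightly different shape:

```python
# A hypothetical sketch of how "dummied up" input can hide an interface bug.
# format_invoice(), the field names, and both records are invented examples.

def format_invoice(record: dict) -> str:
    """Downstream code, written and unit-tested against the stub below."""
    total = record["amount"] + record["tax"]   # silently assumes numeric fields
    return f"{record['customer']}: {total:.2f}"

# The input the developer simulated while the upstream module was still
# being written: clean, numeric, exactly as anticipated.
stubbed_record = {"customer": "ACME", "amount": 100.00, "tax": 8.25}
print(format_invoice(stubbed_record))          # prints "ACME: 108.25" -- looks fine

# What the real upstream module actually sends after a mid-project change:
# the amount now arrives serialized as a string.
real_record = {"customer": "ACME", "amount": "100.00", "tax": 8.25}
try:
    print(format_invoice(real_record))
except TypeError as exc:
    print(f"Interface bug: {exc}")             # fails where the two modules meet
```

The happy-path check passes against the stub; the failure only shows up where the two pieces of code meet.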

It’s at these points—where code and applications interface with one another—that testers often find the majority of their most significant errors. There are a few lessons that can be learned from this.

The first is that testers inevitably benefit from understanding the overall design of a system. It tells them where to focus their effort and highlights where errors are most likely to “hide.”

The second is that testers understand that testing isn’t “done” until they’ve followed a given piece of the puzzle all the way through the entire maze.

What is meant by that?

Say I develop a piece of code that collects information from a business user, massages it, and sends it to a database. So I, as the developer, test exactly that and verify my data is properly stored in each appropriate database field.

Much to my surprise, the tester finds 37 bugs in my code.

What the heck happened???

Well, it’s likely that I only used “good” data for my own tests, lacking both the time and the desire to break my own stuff. I might not have fully understood what the end user was going to do with the data, I might have massaged it incorrectly, and it may have populated the database with data that could not be retrieved and formatted properly by other programs. Those programs that interface with mine might not have been expecting the data in quite the format I provided (or anticipated a given field might be empty), and therefore errors manifested down the line. When my “massaged” data is actually retrieved by the business user, it might not be displayed in the way they require, necessitating changes in my code, the database, and the retrieval code.

The tester, unlike the developer, tests all the way through, from A to Z. This doesn’t mean they can’t assist with more limited testing and help out with more of a unit type of testing. It means they recognize the testing isn’t “done” until a given piece of information is taken through the entire process, from beginning to end.
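
Here is one way to picture that difference in scope, again as an invented sketch (the store_contact() routine, the in-memory “database,” and all of the test data are hypothetical): the developer’s happy-path check passes, while the tester follows less polite data all the way through, from entry to retrieval.

```python
# A hypothetical end-to-end check contrasted with a happy-path unit test.
# store_contact(), the in-memory "database," and all of the data are invented.

database = {}

def store_contact(raw_name: str) -> None:
    """Massage the input and store it: 'First Last' becomes 'LAST, First'."""
    first, last = raw_name.split(" ")          # quietly assumes exactly one space
    database[last.upper()] = f"{last.upper()}, {first}"

def display_contact(key: str) -> str:
    """Retrieve and format the record the way a later screen would."""
    return database[key]

# The developer's test: good data in, good data out, end of story.
store_contact("Ada Lovelace")
assert display_contact("LOVELACE") == "LOVELACE, Ada"

# The tester follows less polite data all the way through the process.
for raw in ["", "Cher", "Mary Jane Watson", "ada lovelace"]:
    try:
        store_contact(raw)
        key = raw.split(" ")[-1].upper()
        print(f"{raw!r} stored, but the screen would display {display_contact(key)!r}")
    except (ValueError, KeyError) as exc:
        print(f"{raw!r} never made it into the database: {exc!r}")
```

The same code that satisfied the happy-path check crashes on empty, single-word, and three-word names, and quietly stores a lowercase first name that no later screen will fix; only a test that follows the data from beginning to end surfaces all of it.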

Good testers are also creative and imaginative. Testing is usually a destructive process, which is why a great deal of care needs to be taken if decisions are made to run a test in production. A good tester is not necessarily trying to prove software works correctly; they’re trying to prove it doesn’t. That difference in attitude is one of the primary reasons testers find so many bugs. They want to find bugs. They analyze all of the information available and sit down and think about how they can break the application. There is no one else on the team with that kind of mission. Developers customarily aren’t even given enough time to reliably create their own code, let alone try to find sufficient time to think about ways to break it. End users typically just execute what they normally do in the course of their jobs and might actually be panicked and upset if something “breaks.” Testers, on the other hand, are going to fearlessly wade in there and kick the tires as hard as they can, and they’re going to be happy if one of them blows up in their face. They’ll be even happier if one of the doors falls off instead, or if their kicking makes the engine fall out.

This is just a validation of what your mother always told you. If you only look for the bad in people, that’s all you’ll find. Testers are systematically looking for the bad in a system. Along the way, they’ll end up verifying what is working correctly as well. But their focus is on driving out what is wrong, not what is right. If your only goal with testing is to prove the system does what it is supposed to do under perfect conditions, your developers will tell you it does exactly that, and you can save yourself a pile of money.

Ah. You don’t believe them? Neither does your testing team.

So how do you work with these quirky people? How do you motivate and integrate them into your team? How do you encourage them to do what they do best: find and report error?

First of all, recognize and appreciate their contributions to your company and project teams. Involve them in project efforts early, like every other pivotal member of the team. Listen to what they have to say and pay attention to what they find during their testing. Fix some of that stuff and give them a pat on the back when they’re working late due to buggy code or late deliveries. Try to express a “woo-hoo!” when a particularly heinous error is uncovered; it will show them you “get it” and understand their work and how they think. When you give kudos to the team at the end of a project, include their names as well and thank them for their efforts. If they’ve done a particularly fine job, send them an email; copy their bosses and yours. Testers are like any other staff members: they’ll knock themselves out for people who they know will recognize and thank them when they go that extra mile.

Recognize that choosing testing as a career requires some level of bravery and commitment. Many testers spend a significant portion of those careers striving for recognition, respect, and success. You can’t go to school at this time and learn what you need to learn to become a successful tester. You need to learn at the School of Hard Knocks. To add insult to injury, many other IT professionals think “anyone can test.” They do not recognize any difference between an end user, developer, BA, or person off the street performing testing, even when their own numbers clearly show their testing staff finds 1,000% more error than any of those other groups. That’s not a coincidence. Do you pull in people from the street to do your programming? This is exactly the same thing. Testing involves more than just a warm body or mere intelligence; it also involves technique. You’ll find that if you treat testing staff with the same type of respect you would give to your development staff, DBAs, etc., you will encourage and build the type of testing organization that will attract and retain top personnel.

If you’re in a position to do so, reward your test team with an even hand. Testers who are paid and rewarded significantly less than your development team will move to greener pastures when the opportunity arises. Those who remain will be cynical and uninspired. Hard as it might be to believe, it is more difficult to find good testers than it is good developers. Most talented IT professionals want to be Luke Skywalker, not Darth Vader. You also need some talent on “the dark side of the force,” so you need to encourage strong testing resources to stick around.

If you understand the issues testers go through to learn and grow professionally and their hunger for recognition, respect, and success, you will invest in (or encourage your company to invest in) training for your testing personnel. Software testing is a specialized field, and at the time of this writing, good courseware at standard learning institutions is limited, to say the least. Bring in some talented teachers and experts to offer their ideas and expand your team’s capabilities. Send them to conferences. Encourage a learning atmosphere. Allow them to try new techniques on small projects.

If you show your test team you value them—through training, recognition, and a policy of equal reward—you will end up with a team that will walk on hot coals for you, and they’ll recommend your company to their equally talented cohorts in the field. You’ll attract and retain The Best of the Best, and that does not necessarily equate to the highest paychecks in the field.

So the next time your testers put in 500 hours of overtime due to late code deliveries and buggy code, and drive out 1,200 errors, 40% of which are showstoppers, tell me…

Was it good for you?

Well, it was good for them. And, ultimately, good for your customers, your company, and your bottom line.
