Harnessing the power of conversational interfaces in security

The case for chatbots in the modern security operations center.

By Courtney Allen, Bobby Filar and Rich Seymour
October 19, 2017

I recently sat down with Bobby Filar and Rich Seymour, senior data scientists at Endgame, to discuss the benefits, challenges, and value of using a chatbot interface in modern security operations centers.

As you see it, what are some of the most common challenges facing most modern security operations centers?

In talking with SOC managers, they break down challenges into three distinct areas. First, they lack sufficient personnel. Everyone is well aware of the workforce shortage, so recruiting and retaining quality, experienced personnel is difficult. SOC managers sometimes have to steer toward inexperienced analysts to fill Tier 1 slots. This workforce shortage leads to the next challenge: the need for tools that help teams with limited experience and/or personnel stay on top of the exponential growth of data and increasingly diverse attack tactics. These teams need an intuitive UI and automation to make the collection and synthesis of data as easy as possible. Finally, while there are commonalities across SOC workflows, each organization has unique constraints and requirements. In many cases, the security platforms organizations bring in are extremely rigid and require a level of expertise or training in a proprietary language to be effective. This combination of challenges is what led us to create a chatbot, opting for real-time data alerting and triage that doesn't require expertise in a proprietary query language but is instead simplified through a natural conversation model.


What are the benefits of using a chatbot interface in a security operations center?

We decided on a conversational interface in response to user interviews and our understanding of SOC analysts' pain points. Those pain points ranged from too much data to repetitive processes to an overall lack of time to respond efficiently to incoming alerts. We looked across industries for assistive technologies we could leverage to provide users with an intuitive interface that could make recommendations and automate day-to-day tasks. We believe intelligent assistants can allow analysts to gather data more intelligently and respond to alerts faster by coupling natural language query capability with an ability to "see" and "remember" what the user has done. The ability to provide users with recommended best practices or shortcuts to complex queries makes navigation simpler and increases overall usability. SOC managers can lean on these capabilities to educate new hires and help with onboarding in a security organization. We've seen our intelligent assistant elevate both novice and experienced users alike within SOC environments.

Alternately, what are some limitations of using a chatbot interface?

Many folks have probably experienced the limitations of chatbots from interacting with Alexa, Siri, or automated tech support. When a bot doesn't understand you, it's either funny or frustrating. In the security domain, we have the added challenge of jargon and of distinguishing between closely related technical terms. To face this challenge, we have developed ways of integrating user and QA feedback. For example, our summer intern, Jiten Bhatt, put together a meta-bot interface where we can test sentences and add new training data through a chatbot interface. This allows our bot to learn new phrasings and improve the models that perform natural language processing and understanding.
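To make the feedback loop concrete, here is a minimal sketch of the idea described above: an intent matcher whose training phrases can be extended at runtime, so new user phrasings improve classification. The class and phrase data are hypothetical illustrations, not Endgame's actual implementation.

```python
# Hypothetical sketch: a tiny intent matcher that can be taught new
# phrasings at runtime, mimicking a "meta bot" feedback loop.
from collections import defaultdict


class IntentMatcher:
    def __init__(self):
        # Maps intent name -> set of known phrasings for that intent.
        self.phrases = defaultdict(set)

    def add_training_phrase(self, intent, phrase):
        """Teach the matcher a new way of expressing an intent."""
        self.phrases[intent].add(phrase.lower())

    def classify(self, sentence):
        """Return the first intent whose known phrasing appears in the sentence."""
        s = sentence.lower()
        for intent, examples in self.phrases.items():
            if any(p in s for p in examples):
                return intent
        return "unknown"


bot = IntentMatcher()
bot.add_training_phrase("search_logins", "show me logins")

# Before feedback, an unseen phrasing falls through to "unknown";
# after a user or QA engineer adds it, the same sentence classifies correctly.
print(bot.classify("pull up login events"))            # unknown
bot.add_training_phrase("search_logins", "login events")
print(bot.classify("pull up login events"))            # search_logins
```

A production system would use a trained NLU model rather than substring matching, but the feedback mechanism, adding labeled phrasings and re-evaluating, works the same way.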

Another limitation is that plain English can’t express complex queries in a structured way. Humans invented programming languages for a reason. Specifying simple boolean logic or math can become cumbersome, if not ambiguous, in a concise sentence. For example, a user could describe events in plain English as “logins not from user admin or user root” which could be somewhat ambiguous to a listener or a bot, while “logins NOT (admin OR root)” has one clear semantic meaning. It becomes incumbent on us to simplify interactions appropriately for our conversational interface while still providing a programmatic interface for queries too complex for plain English.
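The ambiguity point above can be illustrated with a short sketch: once the sentence is mapped onto an explicit boolean structure, the query has exactly one meaning. The event schema and helper function here are hypothetical, chosen only to demonstrate the idea.

```python
# Hypothetical sketch: the plain-English query "logins not from user admin
# or user root" expressed as the unambiguous form "logins NOT (admin OR root)",
# i.e., login events whose user is in neither excluded set.

def matches(event, excluded_users):
    """Return True for login events whose user is NOT (admin OR root)."""
    return event["type"] == "login" and event["user"] not in excluded_users


events = [
    {"type": "login", "user": "admin"},
    {"type": "login", "user": "alice"},
    {"type": "login", "user": "root"},
    {"type": "netconn", "user": "alice"},
]

# The structured query leaves no room for the "or" to bind the wrong way.
filtered = [e for e in events if matches(e, {"admin", "root"})]
print(filtered)  # only alice's login survives
```

A conversational interface has to resolve that binding for the user, while a programmatic interface lets the user write the boolean expression directly when precision matters.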

Ultimately, how can conversational interfaces, such as chatbots, help companies build better defenses?

Chatbots can help companies build better defenses by empowering defenders. The folks that are on the front lines, so to speak, should be gaining expertise in the field not in the GUI of a specific version of some proprietary tool. We believe that companies’ defenses are centered around the people power they have and conversational interfaces can expand the talent pool beyond folks with tool-centric experience.

To be more concrete, right now adversaries often sit in corporate IT infrastructure for far too long. If security teams can search their infrastructure more effectively, they have a better chance of intercepting attackers earlier. We see folks diving into chat-based interfaces because they're more intuitive, more fun, and more forgiving. That last part about forgiving interfaces can't be emphasized enough; my earliest memories of computer programming are of syntax errors. Who wants their first introduction to a tool to be a cryptic error? With a conversational interface, we're able to accept a much broader, more diverse set of inputs and produce the expected action. That's really cool, and it helps make the tech more inclusive.

You’re giving a talk titled Security + Design * Data Science: A Bot Story at the O’Reilly Security Conference in New York this October. What other presentations are you looking forward to attending while there?

It's always good to see security topics outside the realm of hacking and intrusion getting covered, and we think Danielle Leong's talk on consensual software will shine a light on less hacker-oriented domains of infosec and safety. As data scientists, we are very interested in Alex Pinto's talk, Towards a Threat Hunting Automation Maturity Model, and Michael Roytman's talk, Predicting Exploitability With Amazon Machine Learning. Both speakers are active in the security data science community, and their research is novel and accessible, so these should be great talks for all attendees.

Post topics: Security