I recently sat down with Andrea Little Limbago, chief social scientist at Endgame, to discuss how social scientists can help with an organization’s overall security strategy, tips for integrating data scientists into security teams, and thoughts on improving human-machine interactions. Here are some highlights from our talk.
1. What is your background as a social scientist, and how did you end up working in security?
I entered security via the national security path. I have a PhD in political science, focusing on international relations and quantitative methodologies. My research largely focused on the intersection of conflict and economics, exploring the economic factors that make two groups more (or less) conflict-prone.
After teaching briefly, I joined a small Department of Defense analytic organization called the Joint Warfare Analysis Center as a computational social scientist, where I applied my background in international relations to a national security context, translating many of the theories and insights into operational projects. As part of that position, I worked in both research and development and operations, which is where I first started working side by side with engineers, providing data visualization and analytic workflow requirements. Since that time, I’ve wanted to work on the applied end of international relations, crafting analytic interfaces that are useful to more than just engineers, while also continuing to research and contribute to various national security issues.
When Endgame reached out to me, I was already aware that cybersecurity would be one of the most pressing national security challenges of our time, so I was eager to get involved. I also hadn’t seen many political scientists working in this area—and there still aren’t many today—so I was excited to provide a new perspective on the geopolitical theories and concepts shaping security, which is at such a nascent stage when it comes to discussions of really essential topics such as governance, deterrence, and norms.
2. Can you summarize a few studies that are relevant to improving human-machine interactions?
This is a fairly broad field, ranging from autonomous cars to augmented reality to interface design. To overgeneralize, the discourse falls along a spectrum: at one extreme is the dystopia of machines taking over and AI replacing humans; at the other is going off the grid entirely, with some people allegedly returning to typewriters after a round of breaches. But for the most part, research points to the need to move beyond organizational structures that were well suited for the industrial age but are ill-suited for the digital age.
The Second Machine Age is a good example highlighting just how much technology has shaped, and will continue to shape, our daily lives, but it also points to the necessity of optimizing the interaction with humans. This will require an interdisciplinary approach to development, blending insights from psychology to engineering to mathematics to optimize human experience and capabilities. This bleeds over into organizational culture as well, which remains stuck in mid-20th-century frameworks. This is evolving and varies greatly by sector, but in areas like tech, these interactions will continue to enable remote, distributed teams, impacting work-life balance if for no other reason than shorter commutes, which over time will greatly impact demographic trends. Within the workplace, there is increasing focus on interface design to augment human capabilities, crafting much more interactive and intuitive tools that previously were inaccessible to most users. As for the latest hype, many point to voice recognition, specifically the rise of studies and applications in the realm of bots. Many view bots as a huge time saver and aggregator, while others deem them the latest paradigm shift that may not live up to expectations. It’s still far too early to tell, but it is definitely worth keeping an eye on.
Interestingly, there is also research pointing to humans having greater trust in their computers than in other humans, which lends credence to a future where everyone stares at screens even more than they do today. Conversely, it could also show greater hope that human-computer interaction can improve safety and limit human error in anything from data analysis to car crashes. But again, much of this still requires the things humans do best—critical thinking and interpretation, qualitative analysis and contextualization—while leaving the arduous data munging and rote processes to the machines. A great example of this was an analysis of tweets during Hurricane Sandy. If responders had relied solely on an algorithmic reading of the data, Manhattan would have seemed to be the center of gravity, simply because of its population density and continued access to power. The places needing the most help were the ones missing from the data, so context will always matter when it comes to these big data solutions.
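The Sandy pitfall can be made concrete with a toy calculation. All numbers and place names below are invented purely for illustration (they are not from the actual study); the point is only that ranking by raw signal volume rewards connectivity, not need:

```python
# Toy illustration (invented numbers): tweet volume during a storm
# tracks population and power availability, not damage.
tweet_counts = {"Manhattan": 5000, "Far Rockaway": 40, "Staten Island": 120}

# Naive "center of gravity": the loudest signal wins.
naive_focus = max(tweet_counts, key=tweet_counts.get)
print(naive_focus)  # Manhattan

# A silence-aware reading treats low volume as a possible flag rather
# than an all-clear: the hardest-hit areas may be the quietest ones.
quietest = min(tweet_counts, key=tweet_counts.get)
print(quietest)  # Far Rockaway
```

The human analyst's job here is the interpretive step the algorithm cannot do: deciding whether silence means "nothing happening" or "no power to tweet."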
3. How can social scientists help with an organization's overall security strategy?
Social scientists—regardless of their discipline—adopt a very human-centric yet scientifically driven view. Security largely adopts the opposite perspective and is very much focused on technology. As many studies point out, human-centric vulnerabilities remain the most challenging for organizations. Social scientists can help organizations harden their human defenses against clickbait-style attacks such as spear phishing, adware, and ransomware. Many social network analysis models are also useful for insider threat detection. In addition, those with an international relations background are expertly positioned to inform organizations about the security threat landscape and shape risk assessments, while also helping organizations understand why they might be targets and what kinds of external and global events should garner heightened vigilance. These kinds of positions are becoming more common at some of the major tech companies.
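As a rough illustration of the kind of social-network-analysis model alluded to above, here is a toy sketch. The names, the `external_` naming convention, and the threshold are all invented; real insider-threat models use far richer graph features (centrality, temporal baselines, peer-group comparison) rather than a single fan-out count:

```python
from collections import Counter

# Hypothetical communication log: (sender, recipient) pairs, as might be
# drawn from internal email metadata. Entirely invented data.
log = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("dave", "external_1"), ("dave", "external_2"), ("dave", "external_3"),
]

# Count each user's distinct messages to external recipients.
external = Counter()
for sender, recipient in log:
    if recipient.startswith("external_"):
        external[sender] += 1

# Flag anyone whose external fan-out exceeds a (hypothetical) threshold.
flagged = [user for user, n in external.items() if n >= 3]
print(flagged)  # ['dave']
```

Even in this toy form, the social-science framing is visible: the signal is relational (who talks to whom), not content-based, which is exactly where network models complement purely technical controls.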
Helping with the requirements and QA for interface design is another area where social scientists, especially those familiar with quantitative analytics, could help those companies producing security products. This also ties directly back to the pipeline challenge. If more of the security products were made accessible to a larger range of analysts through a more intuitive and functional interface, then more people could move into security from adjoining fields.
Also, depending on their background and expertise, social scientists can provide insight into addressing many of the industry’s workforce challenges, especially as they pertain to shaping a more inclusive and diverse workplace environment. It’s amazing how many solutions in this area ignore many of the key insights on building culture and implement policies orthogonal to inclusivity.
Finally, the security industry doesn’t do the best job of explaining technical aspects to a non-technical audience. This matters for a security strategy both internally and externally, in order to provide the “so what” that is so often lost among the technical jargon. It can help the C-suite and advisory boards better understand risks and recommendations, while for security companies it can help potential customers or partners better understand the capabilities. This translation of security expertise into something accessible to a broader audience is desperately needed these days.
4. What approaches have you seen that better integrate data scientists with security teams?
Based on both failures and successes that I have seen, there are two concrete things organizations can do. First, you need more than one data scientist. I know of several instances where a company hired a lone data scientist. This rarely succeeds, even on small teams, because integrating a more quantitative approach requires a change in mindset. It is extremely difficult to come in as the only data scientist, change the culture, and also receive the kind of work that best leverages your expertise.
Second, data scientists must not be treated like data monkeys. They require challenging projects and shouldn’t be leaned on solely to clean up Excel spreadsheets. Many companies hire data scientists because they know they need one, but aren’t sure what to actually do with them. When integrating data scientists with security teams, there should be clear objectives and challenging projects. Organizationally, it may be difficult at first, but it works really well to place data scientists on the same projects as domain experts, like malware researchers or vulnerability prevention SMEs [subject matter experts]. It may seem intuitive, but physical location matters as well. Don’t stovepipe the data scientists off in their own section of the office; actually place them alongside these domain experts. This can lead to the serendipitous, interdisciplinary interactions that spark innovation.
5. You recently spoke at the O’Reilly Security Conference in New York. How would you describe the event to someone who wasn't able to attend?
I have been to a lot of InfoSec conferences, and the O’Reilly Security Conference stood out for its unique blend of academics and practitioners who are focused on the defensive mission. The keynotes provided a range of perspectives on some of the major debates in security and privacy. At its core, the Security Conference is extremely multi-disciplinary, and focused much more on the socio-technical aspect of security, with everything from user experience to behavioral analytics to artificial intelligence. Moreover, the conference had an extremely open, inclusive and supportive atmosphere. It combined the academic rigor and thirst for innovative solutions with a practitioner’s emphasis on operational relevance and defensive mission. I hope the conference remains true to these roots, as it definitely fills a niche within the community.