Workshop on SOC Operations and Construction (WOSOC) 2024 Program
Friday, 1 March
-
Security Operations Centers (SOCs) are a common and critical piece of an organization's cybersecurity strategy to prevent, monitor, detect, mitigate, and respond to cybersecurity incidents. But those are not the metrics a SOC analyst is measured against: analysts are measured on volume and time. This talk discusses the challenges SOC analysts face with alert fatigue amid an ever-growing number of alerts, and the need to manage that scale by scaling the analyst.
-
Christopher Rodman, Breanna Kraus, Justin Novak (SEI/CERT)
Organizations come in all shapes and sizes, serve myriad purposes, and exist in different security environments. But they all have one thing in common: they need security operations. How should an organization determine which services and functions its Security Operations Center (SOC) should provide? This paper identifies five factors that influence an organization’s SOC service priorities. It then describes a workflow that complements standard security frameworks to efficiently determine and prioritize the services that a SOC should perform for an organization. The services that the SOC offers should complement the organization’s overall cybersecurity program and align with higher level cybersecurity assessment frameworks, such as the National Institute of Standards and Technology Cybersecurity Framework. The workflow is repeatable and can be used regularly to evaluate whether SOC services continue to align with an organization’s priorities in a changing world. This work will interest those responsible for the design, coordination, and implementation of security operations teams in organizations of any size.
-
Eric Dull, Drew Walsh, Scott Riede (Deloitte and Touche)
Cyber has been the original big-data domain for decades. Since Denning and Neumann's 1985 whitepaper on statistical analysis for intrusion detection systems, the field has needed complex event processors to manage the scale of cyber data. Security Operations Centers (SOCs) have been successful in overcoming this challenge, as evidenced by the rise of behavioral analytics, supervised machine learning methods, training data sets, and the scaling of technology. This talk describes the strategies used in successful automation, AI adoption, and implementation, and offers a framework for engaging executives to guide effective AI use in the broader organization beyond the SOC.
-
Cyber threat actors generally create branding content to promote their reputation. While such content can include, for example, carders masquerading as hacktivists, reputational branding is generally treated as a singular, symbolic display of a threat actor's capabilities. This presentation suggests that Security Operations Centers (SOCs) and cyber threat intelligence communities could proactively collect unique forensic and observational behavioral threat information by manipulating threat actors' reputational content, anticipating that actors will respond or react behaviorally when their reputations are publicly questioned or ridiculed. The presentation is exploratory, recognizing that most accounts of manipulating cyber threat actor reputational content are anecdotal. It proposes an integrated conceptual interpretation of the foundational theoretical frameworks that explain why and how people respond behaviorally to content made for them, applied to influencing threat actors with generative artificial intelligence content.
-
Kumar Shashwat, Francis Hahn, Xinming Ou, Dmitry Goldgof, Jay Ligatti, Lawrence Hall (University of South Florida), S. Raj Rajagopalan (Resideo), Armin Ziaie Tabari (CipherArmor)
Large language models (LLMs) are perceived to offer promising potential for automating security tasks, such as those found in security operations centers (SOCs). As a first step towards evaluating this perceived potential, we investigate the use of LLMs in software pentesting, where the main task is to automatically identify software security vulnerabilities in source code. We hypothesize that an LLM-based AI agent can be improved over time for a specific security task as human operators interact with it. Such improvement can be made, as a first step, by engineering the prompts fed to the LLM based on the responses it produces, adding relevant context and structure so that the model returns more accurate results. Such engineering efforts become sustainable if the prompts engineered to produce better results on current tasks also produce better results on future, unknown tasks. To examine this hypothesis, we use the OWASP Benchmark Project 1.2, which contains 2,740 hand-crafted source code test cases covering various types of vulnerabilities. We divide the test cases into training and testing data, engineer the prompts based on the training data only, and evaluate the final system on the testing data. We compare the AI agent's performance on the testing data against that of the agent without prompt engineering. We also compare the AI agent's results against those from SonarQube, a widely used static code analyzer for security testing. We built and tested multiple versions of the AI agent using different off-the-shelf LLMs: Google's Gemini-pro, as well as OpenAI's GPT-3.5-Turbo and GPT-4-Turbo (with both the chat completion and assistant APIs). The results show that using LLMs is a viable approach to building an AI agent for software pentesting that can improve through repeated use and prompt engineering.
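The evaluation protocol described above — split the benchmark, engineer prompts on the training portion only, then score both agent variants on the held-out portion — can be sketched as follows. This is an illustrative harness, not the authors' code; the function names and the 70/30 split are assumptions.

```python
import random

def split_cases(cases, train_frac=0.7, seed=0):
    # Deterministically shuffle the benchmark test cases, then split them
    # into a training set (used for prompt engineering) and a held-out
    # testing set (used only for the final evaluation).
    rng = random.Random(seed)
    shuffled = list(cases)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def accuracy(verdicts, ground_truth):
    # Fraction of test cases where the agent's vulnerable/not-vulnerable
    # verdict matches the benchmark's ground-truth label.
    correct = sum(v == g for v, g in zip(verdicts, ground_truth))
    return correct / len(ground_truth)
```

Keeping the split deterministic (fixed seed) means the engineered and baseline prompts are always compared on the same held-out cases, so any accuracy gap is attributable to the prompts rather than the sample.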
Speakers: kc Claffy (CAIDA), M. Patrick Collins (USC-ISI), S. Raj Rajagopalan (Resideo), Shawn McGhee
-
Alerts serve as the backbone of most SOCs, but alerts alone cannot detect modern, advanced threats without being so noisy that they quickly induce analyst fatigue. Threat hunting has arisen as a complement to alerting, but most SOCs do not operationalize threat hunting with the same rigor as alerting. In this session, we will discuss how SOC teams can overcome this through a model we call Continuous Threat Hunting: using analytic-driven methods to cover more data, but with a standardized approach designed to produce repeatability, effectiveness, and confidence in results.
-
Seth Hastings, Tyler Moore, Corey Bolger, Philip Schumway (University of Tulsa)
This paper presents a method for reduction and aggregation of raw authentication logs into user-experience focused "event logs". The event logs exclude non-interactive authentication data and capture critical aspects of the authentication experience to deliver a distilled representation of an authentication. This method is demonstrated using real data from a university, spanning three full semesters. Event construction is presented along with several examples to demonstrate the utility of event logs in the context of a Security Operations Center (SOC). Authentication success rates vary widely, with the bottom 5% of users failing more than one third of authentication events. A proactive SOC could utilize such data to assist struggling users. Event logs can also identify persistently locked-out users: 2.5% of the population under study was locked out in a given week, indicating that interventions by SOC analysts to reinstate locked-out users could be manageable. A final application of event logs is identifying problematic applications with above-average authentication failure rates that spike periodically, as well as lapsed applications with no successful authentications, which account for over 50% of unique applications in our sample.
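The reduction step — drop non-interactive records, then aggregate what remains into per-user event outcomes from which failure rates can be read off — can be sketched like this. The record field names (`user`, `interactive`, `success`) are illustrative assumptions, not the paper's actual log schema.

```python
from collections import defaultdict

def build_event_logs(raw_records):
    # Collapse raw authentication records into per-user event outcomes,
    # dropping non-interactive entries (e.g. service accounts, token
    # refreshes) so only user-experienced authentications remain.
    events = defaultdict(list)
    for rec in raw_records:
        if not rec.get("interactive", False):
            continue
        events[rec["user"]].append(rec["success"])
    return events

def failure_rate(outcomes):
    # Share of a user's interactive authentication events that failed;
    # users in the tail of this distribution are candidates for outreach.
    return 1 - sum(outcomes) / len(outcomes)
```

Ranking users by `failure_rate` is the kind of query that would surface the bottom-5% cohort the abstract describes, without the SOC wading through raw, non-interactive log noise.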
-
Matt Jansen, Rakesh Bobba, Dave Nevin (Oregon State University)
Provenance-based Intrusion Detection Systems (PIDS) are threat detection methods that utilize system provenance graphs as a medium for performing detection, as opposed to conventional log analysis and correlation techniques. Prior works have explored the creation of system provenance graphs from audit data, graph summarization and indexing techniques, and methods for utilizing graphs to perform attack detection and investigation. However, insufficient focus has been placed on the practical usage of PIDS for detection, from the perspective of end-user security analysts and detection engineers within a Security Operations Center (SOC). Specifically, for rule-based PIDS that depend on an underlying signature database of system provenance graphs representing attack behavior, prior work has not explored the creation process of these graph-based signatures or rules. In this work, we perform a user study to compare the difficulty of creating graph-based detection rules with that of creating conventional log-based detection rules. Participants in the user study create both log- and graph-based detection rules for attack scenarios of varying difficulty, and provide feedback on their usage experience after the scenarios have concluded. Through qualitative analysis we identify and explain trends in both rule length and rule creation time. We additionally run the produced detection rules against the attacks described in the scenarios using open-source tooling to compare the accuracy of the rules produced by the study participants. We observed that both log- and graph-based methods resulted in high detection accuracy, while the graph-based creation process yielded higher interpretability and fewer false positives than log-based methods.
-
James Fitts, Chris Fennel (Walmart)
Red Team campaigns simulate real adversaries and provide real value to the organization by exposing vulnerable infrastructure and processes that need to be improved. The challenge is that as organizations scale in size, the time between campaign retests increases. This can lead to gaps in coverage and in finding emerging issues. Automation and simulation of adversarial attacks can address the scale problem by collecting libraries of Tactics, Techniques, and Procedures (TTPs) and testing them via adversarial emulation software. Unfortunately, automation lacks feedback and cannot analyze the data in real time with each test.
To address this problem, we introduce RAMPART (Repeated And Measured Post Access Red Teaming). RAMPART campaigns are short, one-day campaigns meant to bridge the gap between automated Red Team simulations and full-blown Red Team campaigns. Their speed comes from pre-built playbooks backed by Cyber Threat Intelligence (CTI) research. This approach gives the red team analyst the freedom to make decisions based on the data they see from their tooling, and allows testing further along the attack chain to exercise detections that could otherwise be missed.
-
Johnathan Wilkes, John Anny (Palo Alto Networks)
By embracing automation, organizations can transcend manual limitations to reduce mean time to response and address exposures consistently across their cybersecurity infrastructure. In the dynamic realm of cybersecurity, swiftly addressing externally discovered exposures is paramount, as each represents a ticking time bomb. A paradigm shift towards automation to enhance speed, efficiency, and uniformity in the remediation process is needed to answer the question, "You found the exposure, now what?". Traditional manual approaches are not only time-consuming but also prone to human error, underscoring the need for a comprehensive, automated solution. Acknowledging the diversity of exposures and the array of security tools, we will propose how to remediate common external exposures, such as open ports and dangling domains. The transformative nature of this shift is crucial, particularly in the context of multiple cloud platforms with distinct data enrichment and remediation capabilities.
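One of the exposures named above, dangling domains, lends itself to a simple automated check: compare DNS records against the current inventory of active targets and flag CNAMEs that point at nothing the organization still controls. The sketch below is a minimal illustration under assumed record and field names, not a product workflow.

```python
def find_dangling_cnames(dns_records, active_targets):
    # Flag CNAME records whose target is no longer in the active
    # inventory; these are takeover candidates that an automated
    # remediation pipeline could delete or re-point. The record shape
    # ({"name", "type", "target"}) is an illustrative assumption.
    return [r["name"]
            for r in dns_records
            if r["type"] == "CNAME" and r["target"] not in active_targets]
```

In an automated pipeline, the output list would feed directly into a remediation step (record deletion or reassignment) instead of a ticket queue, which is the speed-and-consistency gain the talk argues for.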
-
Rakesh Bobba, Dave Nevin (Oregon State University)
ORTSOC - The Nation's First Cybersecurity Teaching Hospital - is the heart of a new academic program in cybersecurity at Oregon State University. Adapted from the clinical rotation model long used by our nation's teaching hospitals, ORTSOC provides experiential learning opportunities for students through a year-long program embedded in the curriculum. Guided by experienced professionals, students in the program hone their cybersecurity skills by providing managed cybersecurity services to a consortium of under-served organizations across the region. For students seeking to start careers in cybersecurity operations, the clinical program offers a carefully structured opportunity to build an experiential bridge between the classroom and professional practice. During this talk, the co-founders of ORTSOC will share an overview of the program, present its education and service goals, and discuss the benefits and challenges of a SOC-based clinical rotation model for cybersecurity education.
-
The evolution of vulnerability markets and disclosure norms has increasingly conditioned how vulnerabilities and patches are disclosed to audiences. A limited collection of studies over the past two decades has attempted to empirically examine the frequency and nature of attacks or threat activity related to the type of vulnerability disclosure, generally finding that the frequency of attacks appeared to decrease after disclosure. This presentation proposes extraordinary disclosures of software removal as a way to disrupt collection baselines, suggesting that disclosing the removal of unnamed but topical enterprise software, such as enterprise deception software, could create a singular, unique collection period to compare against baseline cyber threat activity. This disruptive collection event could give cyber threat intelligence teams and SOCs greater visibility into the periodicity and behaviors of known and unknown threat actors targeting them. Disclosing the removal of enterprise software could suggest that vulnerabilities are present on the network, prompting increased threat actor attention and focused activity, because there is uncertainty about the removal and replacement of the software, depending on its perceived function and capability. This presentation is exploratory, recognizing that understanding of how cyber threat actors would respond to such a disclosure is anecdotal and generally limited. It proposes an integrated conceptual interpretation of the foundational theoretical frameworks that explain why and how people respond behaviorally to risk, reward, and anticipated regret, applied to influencing threat actors with extraordinary disclosures of enterprise software removal.