
SOC-CMM v2 - input requested
I’ve recently written an article called A modern monitoring and response model. I would like to take some of the insights from that article and embed them into the SOC-CMM. More concretely, I’m considering the following changes to the SOC-CMM:
  • Integrating enhancements from the SOC-CMM for CERT.
  • Extending the use case management aspect to include visibility and emphasize validation of security monitoring rules.
  • Adding EDR to the technology domain
  • Rewriting ‘analytics’ to ‘network traffic analytics’ and consolidating the IDPS technology. Together with the previous bullet, this means the technology domain is built up from the SOC visibility triad coined by Anton Chuvakin, augmented with SOAR as a major driver for SOC efficiency.
  • Adding purple teaming / red teaming to the services domain
  • Simplifying security incident response, as the SOC-CMM for CERT provides a more detailed assessment.
I’ve become somewhat hesitant to extend the SOC-CMM much further, as that will make assessments even bigger and more time-consuming. Basically, it is big enough as it is. This is why I’m also considering removing the ‘log management’ service from the services domain and including some of its aspects in the security monitoring service.

Please leave your suggestions, comments and thoughts as a reply to this post. I am planning to start the work in August, so you have until then to post your ideas.
I think your line of thinking makes total sense and it is very close to what we are building internally.
One aspect which I have on my list, and which might be something to consider for your model as well, is the "adversarial simulation/emulation" topic.

Besides the purple and red team parts, we are currently trying to establish a more automated approach to validating our detection scenarios.
We are using a threat-intelligence-informed approach in which intelligence informs not only the use cases but also the scenarios used to test those use cases.
While doing so, it is important to test not only the actual detection capability but also the coverage. Knowing that the setup can reliably detect a technique or procedure is an important start, but one should also account for the complexity of global systems and continuously emulate these techniques throughout the environment, to ensure that the logs are collected correctly, forwarded as they should be, and that the correct use cases are in place.

I think these sorts of automated "unit tests" for detection, in combination with implicit coverage tests, are something that could be added to the overall model - basically automating purple teaming to a certain extent using SOAR-like tools and following the same overall principles.
Just as an idea.
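To make the idea concrete, a detection "unit test" could look roughly like the sketch below: emulate a technique, then assert that the corresponding use case fires, and treat a miss as a coverage gap. Everything here is illustrative - the in-memory SIEM stand-in, the function names, and the ATT&CK technique IDs are assumptions, not part of the SOC-CMM itself; in a real setup the emulation would run on an endpoint and the check would query the actual SIEM.

```python
from dataclasses import dataclass, field

@dataclass
class MockSiem:
    """In-memory stand-in for a SIEM: ingests events and applies simple rules."""
    rules: dict = field(default_factory=dict)    # technique_id -> match predicate
    events: list = field(default_factory=list)
    alerts: list = field(default_factory=list)

    def ingest(self, event: dict) -> None:
        self.events.append(event)
        for technique_id, predicate in self.rules.items():
            if predicate(event):
                self.alerts.append({"technique": technique_id, "event": event})

def emulate_technique(siem: MockSiem, technique_id: str) -> None:
    # In a real setup this would execute a benign emulation on an endpoint;
    # here we just inject the telemetry such an emulation would produce.
    siem.ingest({"technique": technique_id, "host": "test-host-01"})

def detection_test(siem: MockSiem, technique_id: str) -> bool:
    """Return True if emulating the technique produced at least one new alert."""
    before = len(siem.alerts)
    emulate_technique(siem, technique_id)
    return any(a["technique"] == technique_id for a in siem.alerts[before:])

siem = MockSiem(rules={
    "T1059": lambda e: e.get("technique") == "T1059",  # command/script execution
})
print(detection_test(siem, "T1059"))  # True  - technique is covered
print(detection_test(siem, "T1003"))  # False - no rule: a coverage gap
```

Running such tests continuously across the environment is what would catch the forwarding and collection breakages mentioned above, not just the rule logic itself.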

I would also agree with excluding the log management service and including some of those aspects in the security monitoring service.

Solid ideas for minor changes.  Personally, I'd keep the Log Management section.  It's a good reminder to be sure all your sources are covered.
I would suggest replacing CERT (wherever it is used) with CSIRT.
Motivation: CERT is trademarked, and new teams are usually called CSIRTs (ENISA uses and promotes this naming) or CIRTs (the ITU develops national CSIRTs under this name, and it is seen elsewhere too).


I think all those changes are really positive and reflect what a SOC should have nowadays.

Thanks a lot for putting this together for the community, this is a very valuable work that helps SOCs tremendously.

A couple of elements I miss, maybe because I haven't been able to find them:

1. Case Management (as a process and as technology). Some people use ITSM tooling but don't have a Case Management process. From my perspective this is crucial, as Case Management needs to deal with events, incidents, enriched data from threat intel, issues, and requests, among others.
2. The connection between Reactive Monitoring and Proactive Monitoring (as a process). When these are disconnected, Reactive Monitoring becomes totally static and theoretical, while Proactive Monitoring becomes a purely firefighting element and consumes loads of resources, as the intelligence acquired is not available for reuse to detect new threats through alert monitoring.
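A minimal sketch of what the first point implies for a case record - it has to carry more than the raw alert. All field and type names here are assumptions for illustration, not a proposed SOC-CMM schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class CaseType(Enum):
    """The kinds of work items a case management process must handle."""
    EVENT = "event"
    INCIDENT = "incident"
    ISSUE = "issue"
    REQUEST = "request"

@dataclass
class Case:
    case_id: str
    case_type: CaseType
    events: list = field(default_factory=list)      # raw monitoring events
    enrichment: dict = field(default_factory=dict)  # e.g. threat intel context

# A monitored event escalated into a case, enriched with threat intel:
case = Case("C-001", CaseType.EVENT)
case.events.append({"rule": "suspicious script execution", "host": "srv-01"})
case.enrichment["threat_intel"] = {"actor": "unknown", "confidence": "low"}
print(case.case_type.value)  # event
```

The point of the structure is that events, enrichment, and the case lifecycle live in one place, which a plain ITSM ticket queue typically does not give you.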

Best regards

Hi Rob,

First of all, great work on the assessment; I started using it at the beginning of this year. I would like to help you give shape to some functional changes.

Some things that come to mind:
- The current assessment is mostly focused on in-house SOCs. We, for example, are an MSP delivering SOC services, and it would be great to have more focus on that side as well. For an external SOC some things may differ, such as a DAP, connecting to customers' processes/case management systems, escalations, etc.

- The assessment is huge for starting/startup SOCs. It would be great to give them a helping hand: where to focus first, how to achieve an organic rise in maturity, which policies and processes to tackle first, etc.
Hi Rob and community.
I worked with the first version and it helped me a lot while assessing clients’ maturity. But since there is now more focus on the next-generation SOC, I would suggest adding the features missing from the 1st version, like orchestration, and a category that evaluates the integration of threat intelligence and hunting within SOC functions.

I would also like to discuss whether I would be allowed to add controls imposed by local authorities, to benchmark them against the controls used in V1, or in V2 when it is published.

Please keep up the good work
