Cyber Security

I replaced manual pen testing with automation. Here’s what I learned.

New audit and compliance requirements have also been added in response to cyber incidents. While these frameworks play an important role in establishing the foundations of security, true security is about more than achieving perfect compliance scores. As I often say, “policies and procedures won’t stop an attacker; they’ll just give them plenty of documentation to leak if they breach us.”

Examining how our defenses stand up to a realistic threat is the real measure of security posture. This is where the annual penetration test comes in, and boards now expect to see good results.

However, I have run into significant problems with manual penetration tests, especially when they are only done annually.

Speed, scope, and the human bottleneck

The challenges of manual testing have become increasingly apparent as our environment grows more complex. Every engagement was bound by time and budget, forcing difficult trade-offs about what to test and how deeply. The quality and depth of the results varied greatly depending on which consultant we worked with: their individual expertise, their familiarity with emerging techniques, and how much they could accomplish within contract hours.

Traditional penetration testing delivered what I saw as a flawed value proposition. We would invest a significant budget to get a snapshot of our security posture, delivered weeks after the test ended and aging like milk from that point on. There was no continuous feedback loop, no ongoing validation of our security controls. We were flying blind between annual assessments, hoping our defenses remained effective as the threat landscape changed daily around us.

The remediation black hole

Perhaps most frustrating is what happens after you get the results. Our teams would work diligently to implement fixes, but we rarely had the budget or opportunity to bring testers back to verify them. We were left with uncertainty. This gap between identifying a vulnerability and verifying its fix created a dangerous blind spot in our security program.

Traditional vulnerability assessment relies heavily on CVSS severity scores, which do not tell us how exploitable a vulnerability is in our particular environment or where it sits within a realistic attack path. We needed to understand what an attacker could accomplish by chaining vulnerabilities together.

A better way forward

Frustrated by these limitations, I explored automated penetration testing, a category that includes breach and attack simulation (BAS) and continuous automated red teaming (CART). Platforms like Pentera and Horizon3.ai’s NodeZero enable continuous, on-demand simulations using real-world attacker tactics, techniques, and procedures (TTPs).

They offer black-box testing (simulating external attackers), gray-box testing (simulating internal threats), and custom scenarios that emulate specific threats such as ransomware or zero-day exploits.

Most importantly, they deliver results quickly: there are no weeks of waiting for reports, and they enable rapid retesting to verify remediation.

Implementation and investment

We went from $35,000 for annual testing to $90,000 annually for an automated platform, delivering over $1.3 million worth of testing. Our cadence jumped from one test per year to 38 scheduled tests, with unlimited flexibility for additional simulations.
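The cost shift is easier to see on a per-test basis. A minimal sketch using the figures above (the test count and prices are the article’s; the per-test framing is mine):

```python
# Rough cost-per-test comparison using the figures from this article.
manual_annual_cost = 35_000      # one consultant-led test per year
platform_annual_cost = 90_000    # automated platform subscription
scheduled_tests_per_year = 38    # bi-weekly runs plus monthly custom scenarios

manual_cost_per_test = manual_annual_cost / 1
automated_cost_per_test = platform_annual_cost / scheduled_tests_per_year

print(f"Manual:    ${manual_cost_per_test:,.0f} per test")
print(f"Automated: ${automated_cost_per_test:,.0f} per test")
# The automated platform works out to roughly $2,400 per scheduled test.
```

Even before counting the unlimited ad-hoc simulations, the per-test cost drops by more than an order of magnitude.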

We established a bi-weekly rhythm of black-box and gray-box testing, supplemented by monthly custom scenarios targeting specific concerns such as ransomware. This gave our team two weeks to remediate findings before the next run confirmed the fixes worked. These tools test more in a day than human testers can in a week, letting us close findings quickly and freeing the team for deeper investigation.

Unexpected lessons and team transformation

The platform brought insights that fundamentally changed our understanding. Take password security: we mandated long passphrases, confident that a fourteen-character passphrase raised cracking time from eight months to twelve billion years. The tool shattered that confidence, cracking a 23-character passphrase containing uppercase and lowercase letters, numbers, and special characters in less than half an hour. The lesson was humbling: people are predictable. Attackers maintain wordlists and precomputed hash tables (rainbow tables) that specifically target common phrases. Passphrase length matters, but passphrase quality matters more.
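The gap between apparent and real passphrase strength can be quantified with a back-of-the-envelope entropy estimate. A sketch under illustrative assumptions (the 10,000-word dictionary and 1,000 mangling rules are hypothetical attacker-model parameters, not figures from the article):

```python
import math

# Naive estimate: 23 characters drawn independently from ~94 printable symbols.
naive_bits = 23 * math.log2(94)

# Attacker's model: a four-word phrase from a 10,000-word dictionary,
# with ~1,000 common capitalization/number/symbol mutations per phrase.
wordlist_bits = math.log2(10_000 ** 4 * 1_000)

print(f"Naive brute-force estimate: {naive_bits:.0f} bits")  # looks unbreakable
print(f"Wordlist + rules estimate:  {wordlist_bits:.0f} bits")  # very crackable
```

The naive model credits the passphrase with roughly 150 bits; the attacker’s dictionary model cuts that to about 63, which is well within reach of modern GPU cracking rigs. The phrase is only as strong as the process that generated it.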

The power of retesting has proven a game changer. Security teams can identify problems, fix them, and quickly retest to confirm the fixes work. The platform generated high-level board presentations and detailed technical reports, allowing security teams to act immediately, not weeks later.

Perhaps most importantly, the platform raised the bar for our team. Until your team sees an automated penetration testing tool exploiting their environment, they won’t fully understand how to apply security concepts to their specific systems. Each simulated attack was fully documented, providing real-time learning opportunities. The team started treating the findings like a game they were determined to win.

Rethinking prioritization: attack paths over severity scores

One of the most important revelations was how automated penetration testing changed our vulnerability management. We found that critical vulnerabilities receiving immediate attention might be buried five layers deep in an attack path, while low-severity vulnerabilities we might otherwise ignore could be an attacker’s first foothold. The platform repeatedly showed how seemingly minor vulnerabilities can be chained to reach critical systems.

This changed our remediation strategy. Instead of prioritizing by CVSS severity rating, we focused on breaking the attack paths attackers could actually use. Given the sheer number of findings competing for attention, this attack-path intelligence let us focus limited resources where they would produce the greatest security impact, rather than chasing critical-rated issues that did not reflect real-world risk.
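The idea of prioritizing by attack path rather than severity can be sketched as a graph search. A minimal illustration (the assets, vulnerability names, and CVSS scores below are all hypothetical, and real platforms build far richer graphs):

```python
from collections import deque

# Hypothetical attack graph: each edge is (next_asset, vulnerability, CVSS)
# and represents a lateral-movement step the vulnerability enables.
graph = {
    "internet":   [("web-server", "exposed-admin-page", 4.3)],   # "medium"
    "web-server": [("file-share", "weak-smb-signing", 5.3),
                   ("legacy-box", "critical-rce", 9.8)],         # critical, but a dead end
    "file-share": [("domain-controller", "cached-creds", 6.5)],
}

def attack_path(graph, start, target):
    """Breadth-first search for the shortest exploit chain reaching the target."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for nxt, vuln, cvss in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(vuln, cvss)]))
    return None  # target unreachable from start

path = attack_path(graph, "internet", "domain-controller")
for vuln, cvss in path:
    print(f"fix first: {vuln} (CVSS {cvss})")
```

Here the chain to the domain controller is built entirely from medium-severity findings, while the lone 9.8 sits on a dead-end host: exactly the inversion of a CVSS-ordered to-do list.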

The gap between configuration and reality

We put a lot of faith in our security tools: when we enable a feature, we assume it works. The automated penetration testing platform delivered an important lesson: test your controls, don’t just trust the GUI.

I saw this firsthand when we enabled a feature meant to mitigate a specific class of attack. It looked perfect on the screen, but it wasn’t working. The platform ran various attack types, including the exact scenario the feature was supposed to protect against. The attack succeeded: the feature was silently broken due to a bug. We didn’t have the protection we thought we had.
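“Test the control, not the GUI” can be as simple as attempting the very behavior the control is supposed to block. A minimal sketch (the hostname and blocked-port list are hypothetical placeholders for your own environment):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a real TCP connection; trust the result, not the checkbox."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable -> not reachable
        return False

# Hypothetical control check: SMB and RDP should be blocked from this segment.
for port in (445, 3389):
    if port_reachable("fileserver.internal.example", port):
        print(f"CONTROL FAILURE: port {port} is reachable")
    else:
        print(f"ok: port {port} blocked as expected")
```

A full BAS platform runs thousands of such checks against real attack techniques, but the principle is the same: observe the outcome, not the configuration screen.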

It reminds me of the defender’s dilemma: “Defenders must be right 100% of the time; attackers only need to be right once.” I would much rather have our testing tools expose these holes than have attackers find them.

Comprehensive validation: testing your detections and your SOC

Another powerful application is validating your detection stack and SOC. The first time I ran a proof of concept, I deliberately did not inform our third-party SOC. Our internal SIEM quickly generated many alerts, yet it took four hours for the external SOC to contact us – a lifetime in cybersecurity.

If you are paying for a third-party service, validating their response is essential, and I strongly recommend running at least one unannounced test. The results may surprise you, and it’s far better to find gaps during a test than during a real incident.

One last lesson: as your defenses improve and your scores climb higher, you eventually reach a plateau. Moving to a different automated penetration testing platform can yield new discoveries, as each tool takes a different approach, offering opportunities to keep improving rather than become complacent.

Verdict: Evolution, not extinction

Should you replace manual penetration testing with automated platforms? The answer is nuanced. For continuous security validation, continuous improvement, and operational assurance, automated testing should be your primary method. The ROI, learning opportunities, and continuous feedback loop far exceed what annual testing delivers.

However, I wouldn’t do away with manual testing entirely. It’s still worth bringing in specialized human testers for complex custom applications, significant infrastructure changes, or when you need the kind of insight that only experienced security researchers can provide. Think of automated platforms as your daily training program and manual testing as the occasional specialist examination.

The real question is whether you can afford not to have continuous automated validation. The gap between annual tests leaves you exposed 364 days a year. Automated penetration testing fills that void, transforms your team’s capabilities, and makes your security posture an ongoing practice, not a once-a-year exercise performed when auditors ask.
