Better Together: Combining Automation and Manual Testing

When I started working in mobile application security in 2018, most testing was still largely manual. Since then, the ecosystem has exploded with scanners, frameworks, and automation platforms, and AI has become the center of attention. With more tools available, the question of automated testing keeps coming up. "Automate all the things" can sound threatening to testers because it suggests our jobs could become obsolete.
So, when it comes to mobile application penetration testing, what actually finds vulnerabilities: manual testing or automation? This post breaks down what automated mobile testing does well and where it falls short. I will also discuss why manual testing still provides value in discovering high-impact vulnerabilities.
Automation
First, we will discuss automation. Automation promises a streamlined end-to-end process, reduces costs, and ensures consistency and coverage. Automated mobile testing comes in several forms: static scanners for Android and iOS applications, mobile dynamic application security testing (DAST) tools, proxy-based scanners, and API security scanners tied to mobile back ends.
Automated testing is good at finding known vulnerability patterns. The following issues are pattern-based, repeatable, and detectable at scale:
- Insecure cryptographic implementations
- Hardcoded secrets and API keys
- Insecure local storage usage
- Missing certificate pinning
- Misconfigured app permissions
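Most of these checks boil down to pattern matching over decompiled sources. Here is a minimal sketch of the hardcoded-secret case in Python; the rules and sample input are illustrative, and real scanners ship far larger, tuned rule sets:

```python
import re

# Illustrative patterns a static scanner might use to flag hardcoded secrets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_source(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs found in decompiled source."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

# Example: scan one decompiled file's contents.
for rule, match in scan_source('String apiKey = "sk_live_abcdefghijklmnop1234";'):
    print(f"[{rule}] {match}")
```

In practice a scanner walks every file produced by the decompiler and aggregates results, which is exactly the kind of repeatable, at-scale work automation handles well.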
Automation is also good for application hygiene: it prevents regressions by catching reintroduced vulnerabilities, enforces secure coding baselines, and flags configuration drift across releases. Lastly, automation can help with API testing by enumerating endpoints, identifying missing authentication, and detecting obvious injection flaws.
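The missing-authentication check, for instance, can be sketched as a simple loop over enumerated endpoints. Everything here is hypothetical (the endpoint list, the function names), and the HTTP call is injected so the logic works with any client library or a stub:

```python
from typing import Callable

# Hypothetical endpoints an automated crawler might have enumerated.
ENDPOINTS = ["/api/v1/profile", "/api/v1/orders", "/api/v1/health"]

def find_missing_auth(
    endpoints: list[str],
    fetch_status: Callable[[str, dict], int],
) -> list[str]:
    """Flag endpoints that answer 200 to a request with no credentials.

    fetch_status(path, headers) -> HTTP status code. It is passed in so
    this sketch can wrap requests/httpx in real use, or a stub in tests.
    """
    flagged = []
    for path in endpoints:
        # Send no Authorization header at all; a 200 suggests the
        # endpoint does not require authentication.
        if fetch_status(path, {}) == 200:
            flagged.append(path)
    return flagged
```

Against a live back end you would supply something like `lambda p, h: requests.get(base_url + p, headers=h).status_code`.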
Despite these strengths, automation can miss the vulnerabilities attackers actually exploit, such as business logic and authorization flaws. Automation struggles here because these flaws require context and human interaction, not signature-based detection. Examples include unauthorized data exposure, abuse of legitimate app workflows, and privilege escalation achieved by performing actions outside the expected workflow.
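To make the authorization case concrete, here is a toy back end (the data and handler names are invented for illustration) showing why scanners overlook these flaws: both handlers return well-formed, successful responses, but only one checks who is asking:

```python
# Toy order store keyed by client-supplied IDs.
ORDERS = {
    "order-1001": {"owner": "alice", "total": 42},
    "order-1002": {"owner": "bob", "total": 99},
}

def get_order_vulnerable(requesting_user: str, order_id: str) -> dict:
    # BUG: trusts the client-supplied order_id and ignores the caller's
    # identity entirely. To a scanner this response looks perfectly
    # healthy: status 200, valid structure, no error signature.
    return ORDERS[order_id]

def get_order_fixed(requesting_user: str, order_id: str) -> dict:
    order = ORDERS[order_id]
    # Enforce ownership server-side before returning anything.
    if order["owner"] != requesting_user:
        raise PermissionError("not your order")
    return order
```

A human tester spots the flaw by logging in as one user and requesting another user's IDs, then reasoning about whether the data returned should have been reachable.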
Chained attacks can also go unnoticed because a scanner evaluates each issue in isolation, whereas an experienced pentester chains multiple weaknesses together. A typical chain might combine reverse engineering, client-side manipulation, API abuse, and logic flaws.
Pentesters can also adapt to application-specific defenses, while automated tools may fail outright when the application uses code obfuscation, root/jailbreak detection, or runtime integrity checks.
Manual Testing
Automation can provide coverage, speed, and consistency that manual testing cannot, but manual testing provides expertise in context, creativity, and business logic understanding.
While manual testing is slower, it can be more effective at finding high-risk issues by employing human logic and understanding. A tester may ask questions like:
- What is this app trying to prevent?
- What happens if I do this out of order?
- What assumptions does the back-end server make?
These are questions a scanner is not capable of asking. Reverse engineering is a skill I apply in every mobile pentest I perform. Some reverse engineering steps include:
- Decompiling or disassembling the code
- Performing static analysis
  - Looking for hard-coded secrets
  - Identifying potential security flaws
- Conducting dynamic analysis
  - Observing traffic
  - Intercepting function calls
- Understanding how security controls and the application actually work
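As one example of the static-analysis step, a decoded AndroidManifest.xml (for instance, one produced by apktool) can be reviewed for risky settings. A minimal sketch in Python, with an illustrative sample manifest:

```python
import xml.etree.ElementTree as ET

# Namespace used for android: attributes in manifests.
ANDROID_NS = "http://schemas.android.com/apk/res/android"

def audit_manifest(xml_text: str) -> list[str]:
    """Flag a few risky settings in a decoded AndroidManifest.xml."""
    findings = []
    root = ET.fromstring(xml_text)
    app = root.find("application")
    if app is not None:
        if app.get(f"{{{ANDROID_NS}}}debuggable") == "true":
            findings.append("application is debuggable")
        if app.get(f"{{{ANDROID_NS}}}allowBackup") == "true":
            findings.append("allowBackup is enabled")
        for comp in app:
            # Exported components are reachable by other apps on the device.
            if comp.get(f"{{{ANDROID_NS}}}exported") == "true":
                name = comp.get(f"{{{ANDROID_NS}}}name", "?")
                findings.append(f"exported component: {name}")
    return findings

SAMPLE = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <application android:debuggable="true" android:allowBackup="true">
    <activity android:name=".DebugActivity" android:exported="true"/>
  </application>
</manifest>"""
```

The value of doing this by hand is the follow-up: once an exported component is found, the tester goes on to ask what it does and how it can be abused, which is exactly the context a checklist scan stops short of.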
Often testers take advantage of design failures in the mobile application by exercising its functionality and abusing it to find new vulnerabilities. This can look like abusing refunds, credits, and promotions; escalating roles through workflow misuse; and accessing other users' data without exploiting a technical flaw.
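Workflow misuse can be illustrated with a toy server-side state machine (the Checkout class and its states are hypothetical). The point is that step ordering must be enforced on the back end, because a tester driving the API directly simply skips the steps the app's UI would have required:

```python
class Checkout:
    """Toy server-side purchase workflow: cart -> paid -> shipped."""

    def __init__(self) -> None:
        self.state = "cart"

    def pay(self) -> None:
        if self.state != "cart":
            raise ValueError(f"cannot pay from state {self.state!r}")
        self.state = "paid"

    def ship(self) -> None:
        # The server must enforce ordering itself. If this check lived
        # only in the mobile client, calling ship() before pay() via the
        # raw API would hand out unpaid goods.
        if self.state != "paid":
            raise ValueError(f"cannot ship from state {self.state!r}")
        self.state = "shipped"
```

A manual tester's "what happens if I do this out of order?" translates directly into calling `ship()` before `pay()` and checking whether the server objects.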
Lastly, manual testing tends to identify the highest-impact findings such as authorization bypasses, business logic abuse, excessive data exposure via APIs, and insecure back-end assumptions about the client.
Manual testing is sometimes seen as slower and more costly in fast-paced development environments, but that’s largely because it relies on skilled professionals who bring valuable expertise. It allows testers to dig deeper, think creatively, and use a variety of tools to uncover issues that automated scans might miss. While it can take more time and each tester may approach things a bit differently, this often results in more thorough coverage and stronger security overall. When combined with automation, manual testing helps create a more well-rounded and effective testing strategy.
Combining Automation and Manual Testing
The most mature mobile security programs do not choose between manual and automated testing; they use both strategically. Use automation for continuous testing, baseline checks, and surface-level findings at scale. Reserve manual testing for pre-release and major feature reviews, high-risk apps and sensitive data flows, and validation of real-world findings.
For security leaders, it's important to understand that tool coverage does not equal security coverage, passing automated scans does not mean your app is safe, and reducing manual testing increases breach risk.
Automated tools have an important place in mobile security testing because they scale well, provide quick feedback, and catch known issues efficiently. But the most impactful mobile vulnerabilities rarely look like checklist items or signature matches. They emerge from how an application behaves, how trust is established, and where assumptions quietly break down.
The flaws that matter most are found through understanding the application, experimenting with edge cases, and applying logical analysis, not by running another scan. Automation optimizes for speed and coverage, while manual testing optimizes for insight. A mature mobile security program doesn’t choose one over the other; it uses automation to move fast and manual testing to make sure it’s moving in the right direction.
Building a well-rounded testing strategy is an evolving process. If you want assistance, get in touch with us!