The hacking community has been abuzz this week discussing how AI is going to change their jobs - or if it’s going to delete them entirely.
Do hackers have any future left? I'm sure we do.
We’ve seen this pattern happen before.
Developers.
Designers.
Marketers.
When AI arrived, news of massive disruption ensued. People were scared and panicked, and understandably so.
But in the end, what we are seeing is that AI is making them better at their jobs. Instead of being an adversary, it has become a coworker. There’s room for both autonomous agents and copilot technologies.
However, this also happened because plenty of guides and tools were built to help integrate AI into the way they work. If you want AI to help you at work, you need to know how to use it.
This is precisely the topic of the talk we hosted this week at the C-DAYS 2025 conference: a workshop on how to use LLMs to hack better and faster, and thus better protect the internet.
And that’s why we’re publishing the full accompanying document and repository with examples of how to incorporate LLMs into your pipeline, from scraping to subdomain enumeration, payload generation, and autonomous hacking.
Here are some highlights.
Vibe coding risks
Before discussing how AI can assist us, we must be mindful of its risks. While AI tools boost productivity, they can also introduce new security challenges. The security community is no longer looking only for common human errors, but also for the unforeseen consequences of AI-generated code. These applications may contain vulnerabilities that are incredibly simple and obvious to a trained eye. One real example was Harley’s Hacker Directory app: Harley highlighted the dangers of prioritizing speed and “good vibes” over robust security, especially when using technologies like Supabase and PostgreSQL.
Here’s an example of Claude Code generating a simple PHP application for viewing and uploading files:
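The original example is a PHP application; the Python sketch below (hypothetical names, not the actual generated code) illustrates the same pattern: traversal is blocked with basename-style sanitization, while other attack surface is left wide open.

```python
import os

UPLOAD_DIR = "uploads"

def safe_path(filename: str) -> str:
    # basename() strips any directory components, so "../../etc/passwd"
    # collapses to "passwd" and classic path traversal is blocked
    return os.path.join(UPLOAD_DIR, os.path.basename(filename))

def view(filename: str) -> bytes:
    # Read a previously uploaded file
    with open(safe_path(filename), "rb") as f:
        return f.read()

def upload(filename: str, data: bytes) -> str:
    # Note: nothing here restricts the file extension or content,
    # the kind of gap AI-generated code often leaves open
    os.makedirs(UPLOAD_DIR, exist_ok=True)
    path = safe_path(filename)
    with open(path, "wb") as f:
        f.write(data)
    return path
```

The sanitization looks convincing at a glance, which is exactly why a trained eye still needs to ask what else the app allows.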
If you look at the code, you may notice that path traversal is prevented via the basename function. Can you still exploit it? There is a shell waiting for you at https://ai4eh.ethiack.ninja.
Some vulnerabilities introduced by AI may be absurd. Models hallucinate and they lack context about authorization or comprehensive security practices. This simple example illustrates the dangers of prioritizing speed over robust security when using AI-generated code.
Reconnaissance
While we have identified some of the security risks associated with using AI tools to produce code, those same tools can also be leveraged to enhance security. For example, LLMs are useful for generating wordlists to feed into DNS brute forcing, or for enriching permutations for subdomain enumeration via alterx. While prompt-based approaches work well for generating target-specific subdomains, and specialized tools like Subwiz use fine-tuned models for prediction, there are scenarios where traditional NLP techniques offer more efficient and cost-effective solutions than general-purpose LLMs.
Instead of feeding the entire website content directly to an LLM to generate keywords, a possible approach uses NLTK to extract meaningful keywords first:
- Content Extraction: Fetch target subdomains and scrape their content
- Keyword Extraction: Use NLTK to tokenize, lemmatize, and filter relevant terms
- Find relevant keywords: Remove stop words, apply length filters, and rank by frequency
- LLM Enhancement: Combine extracted keywords with LLMs for enrichment
Here is an example:
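A minimal stdlib-only sketch of the first three steps (a real pipeline would use NLTK’s tokenizer, lemmatizer, and stopword corpus, then pass the result to an LLM for enrichment; the stopword set and page text here are illustrative):

```python
import re
from collections import Counter

# Tiny stand-in for NLTK's stopword corpus
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for",
              "is", "are", "on", "with", "our", "your", "we", "you"}

def extract_keywords(page_text: str, min_len: int = 4, top_n: int = 10) -> list[str]:
    # Tokenize: lowercase alphabetic words (NLTK's word_tokenize in a real pipeline)
    tokens = re.findall(r"[a-z]+", page_text.lower())
    # Filter: drop stop words and short tokens (lemmatization omitted here)
    candidates = [t for t in tokens if t not in STOP_WORDS and len(t) >= min_len]
    # Rank by frequency; the top terms become subdomain wordlist seeds
    return [word for word, _ in Counter(candidates).most_common(top_n)]

page = ("Welcome to our developer portal. Access the API docs, "
        "staging API keys, and developer sandbox.")
print(extract_keywords(page))
```

Frequency ranking on scraped content surfaces target-specific vocabulary (product names, internal terms) that generic wordlists miss.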
Then, you can combine multiple tools to find subdomains:
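As an illustration of that combination, the sketch below turns extracted keywords into alterx-style permutations and keeps only candidates that resolve; in practice you would feed the wordlist to tools like alterx and a DNS resolver such as dnsx (patterns here are assumed, not alterx’s actual defaults):

```python
import socket

def permute(keywords: list[str], domain: str) -> list[str]:
    # alterx-style patterns: keyword as a label, plus dev/staging affixes
    patterns = ["{kw}.{d}", "{kw}-dev.{d}", "dev-{kw}.{d}", "{kw}-staging.{d}"]
    return [p.format(kw=kw, d=domain) for kw in keywords for p in patterns]

def resolves(host: str) -> bool:
    # DNS brute-force step: keep only candidates that actually resolve
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False

candidates = permute(["api", "portal"], "example.com")
live = [h for h in candidates if resolves(h)]
```

Keyword extraction supplies the vocabulary, permutation expands it, and resolution prunes it; each stage is a separate tool in a real pipeline.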
Hackbots
Hackbots can be defined as “any automated system that uses AI in a meaningful way in order to find vulnerabilities”. Multiple organizations are researching this topic, including Ethiack. Hackbots can be applied to both CTF scenarios and real-world applications, automating repetitive tasks, scanning significant amounts of code, and identifying potential vulnerabilities at a much faster pace. However, human judgment remains critical, as hackbots lack natural creativity and critical thinking.
The need for balance is clear: as AI generates more code with potential vulnerabilities, hackbots emerge as tools to detect these vulnerabilities to keep the global information network stable.
Ready to explore how AI is transforming Ethical Hacking? Check out the full document and repository for some introductory hands-on examples including:
- Reconnaissance & Discovery: Contextual subdomain enumeration, screenshot analysis, and content discovery
- Exploit Development: Automated vulnerability detection
- Hackbots: Using and extending open-source AI agents, leveraging Burp AI
- Integrations & Plugins: MCP servers for Burp Suite and Ghidra, Caido Shift Plugin and custom tool orchestration
- CTF Challenges: Simple scenarios to test your skills
If this is something you're into, you need to check out HackAIcon.
Until next time,
0xacb & Ethiack Research Team