Stop using chat interfaces for your command line interfaces!
Artificial Intelligence
/ July 1, 2025 • 6 min read
Tags:
mcp
automation
tool calling
cybersecurity
workflow automation
You're running an Nmap scan on a target and encounter an unfamiliar service. What do you do? Alt-tab to your preferred AI, type "what is this service and are there any known vulnerabilities?", wait for the response, and then continue your vulnerability research.
This workflow is fundamentally broken. We're using AI as a glorified command line when it should be our always-on research assistant, integrated directly into our security workspace.
Model Context Protocol (MCP)
Anthropic developed a protocol for defining a list of tools that an AI can execute, effectively giving AI capabilities beyond the chat interface. MCP servers offer tools which MCP clients (and servers) can consume. This led to the inevitable: people created MCP servers offering all kinds of functionality (tools). My personal opinion is that MCP adds unnecessary complexity to most projects, but that's a blog post for another time.
Colleagues in cybersecurity started using AI chat interfaces to interact with their terminal-based tools. AGI was proclaimed and everyone became AI experts. Well, perhaps I exaggerate a bit with the expert part (or maybe not), but my understanding of the public discourse was that MCP + security tooling + AI chat interface = innovation. I fundamentally disagree with that conclusion, and in the rest of this article I will elaborate on why I believe it is actually the opposite.
The Problem with AI Chat Interfaces for Security Workflows
I believe people are misusing both MCP servers and AI when integrating them with their tools, cybersecurity tools in particular. Instead of using traditional interfaces like the command line, people are now turning to MCP servers and tools like Claude or ChatGPT to control their tooling. They write natural language prompts, which are interpreted and mapped to the appropriate commands, often with added analysis, formatting, or summarization.
But for cybersecurity tools, instead of treating chat interfaces like Claude or ChatGPT as a new front end, why not use AI as an assistant within the workflow and workspace? You execute your tools as normal. When you're doing large-scale recon, for example, you may find technologies you don't know about. You could switch windows to your AI and type your query, or you could simply verbalize it and let the AI perform the research. The key difference: the AI is listening and connected to your workspace, so whatever you ask it to do, it can save the results there.
Workflow Example
For example, if you’re doing recon where you encounter something new and need to do some scripting, or need to extract, transform, and load something, then you simply ask the AI to do that and give it the data. It can write the script, run the script in a sandbox, and then take the result and put it in your workspace. This could be done in a few ways, for example, your AI could be integrated as a menu when you right-click an item in your workspace. Consider a file containing lots of URLs, what if you just right-click the file, choose “Query AI” which pops up a prompt window where you can issue your command. The AI completes the task and saves the result directly to your workspace. A notification can even be sent via the notification area in your OS.
Voice Integration
Voice integration takes this further: if you can issue commands directly via speech-to-text, you never leave your workspace and you stay in flow. This would require tighter OS integration, but the results would be magnificent. The more the AI knows about your environment, such as which files or text you have selected or which tools you are running, the more context it has to assist you.
If you say, “Okay AI, I just added a new security finding to the list” then the AI understands the context and can say, “Oh great, I can format it and add it to the report. Do you want me to fix that for you?” - Seamlessly, the AI runs a background task to update the report, while you continue working.
If you have access to a vulnerability database and notice, say, WordPress 3.4 in your recon data, you simply ask your AI: "Hey, I found WordPress 3.4, can you return all CVEs for that version?" The AI immediately returns the information to your workspace, or adds it to your report. Your assistant is always there, always connected, ready to receive instructions.
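Such a lookup could, for instance, be backed by NIST's NVD CVE API. The sketch below builds a keyword-search query and flattens a response into workspace-ready lines; the endpoint and response shape follow the public NVD 2.0 API, but rate limits and API keys apply in practice, and the summarization format is just my assumption.

```python
from urllib.parse import urlencode

# NVD CVE API 2.0 endpoint (real public service; check rate limits and
# API-key requirements before using it in anger).
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def build_cve_query(product: str, version: str) -> str:
    """Build a keyword-search URL for the given product and version."""
    return NVD_API + "?" + urlencode({"keywordSearch": f"{product} {version}"})


def summarize_cves(api_response: dict) -> list[str]:
    """Reduce an NVD-style response to 'CVE-ID: description' lines
    suitable for dropping into a workspace file or report."""
    lines = []
    for item in api_response.get("vulnerabilities", []):
        cve = item["cve"]
        desc = next((d["value"] for d in cve.get("descriptions", [])
                     if d.get("lang") == "en"), "")
        lines.append(f"{cve['id']}: {desc[:80]}")
    return lines
```

The assistant would fetch `build_cve_query("WordPress", "3.4")`, run the JSON through `summarize_cves`, and append the lines to the report, all without the analyst leaving the recon data.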
The following table compares the original workflow with the integrated assistant workflow described above.
| Traditional Chat UI | Integrated AI Assistant |
|---|---|
| Alt-tab to chat | Stay in workspace |
| Type commands | Use voice naturally |
| Manual copying | Auto-saves results |
Table 1: Comparison between the traditional interface and an integrated assistant
Why This Matters
That is the goldmine here. That is what we want to achieve.
I think that is the direction we should go in cybersecurity, and not simply use the chat interface as an interface for our tooling. Instead, we should actually use it as an assistant, preferably through speech-to-text. Using voice is much more natural than alt-tabbing to ChatGPT or Claude and typing out a formal command. Voice preserves the flow state that chat interfaces destroy.
Sure, you could build MCP tools that save to your workspace. But you’re still typing commands in a chat window instead of having AI seamlessly integrated into your actual workflow. Additionally, voice transforms AI from a tool you go to into an assistant that’s with you. It’s the difference between:
- Stop → Type → Wait → Resume
- Just speak while working → Get results in your workspace
Security Considerations
Of course, giving AI unconditional access to your workspace, files, and commands can introduce vulnerabilities. It is therefore vital to design a system that gives the AI the least access and fewest privileges it needs. Dangerous actions must be explicitly allowed.
Access Control:
- Implement read-only access by default
- Require explicit confirmation for write operations
- Sandbox all script execution
- Use separate credentials for AI operations
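The access-control points above can be reduced to a small gate that every AI-initiated action passes through. A minimal sketch, assuming a simple action-name scheme of my own devising: reads are allowed by default, writes only after explicit per-session confirmation.

```python
# Least-privilege gate for AI-initiated actions: everything is read-only
# unless the user has explicitly allow-listed the operation this session.
class ActionGate:
    READ_ONLY = {"read_file", "list_dir", "search"}  # assumed action names

    def __init__(self) -> None:
        self.allowed_writes: set[str] = set()

    def allow(self, action: str) -> None:
        """Record an explicit, per-session user confirmation
        for a dangerous (write/execute) action."""
        self.allowed_writes.add(action)

    def check(self, action: str) -> bool:
        """Return True if the AI may perform this action right now."""
        return action in self.READ_ONLY or action in self.allowed_writes
```

Script execution would additionally go through a sandbox, and the gate would authenticate with credentials separate from the analyst's own, per the list above.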
Audit Trail:
- Log all AI actions and queries
- Track file access and modifications
- Monitor for unusual patterns
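An audit trail like the one listed above can be retrofitted with a decorator that records every AI action as a structured log entry. This is a sketch: a real system would write to append-only storage and feed the entries into whatever monitoring you already run.

```python
import functools
import json
import time


def audited(log: list):
    """Wrap an AI-invokable function so every call appends a structured
    record (timestamp, action name, arguments) to the given audit log."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            log.append(json.dumps({
                "ts": time.time(),
                "action": fn.__name__,
                "args": [repr(a) for a in args],
            }))
            return fn(*args, **kwargs)
        return inner
    return wrap
```

Because the records are JSON, the "monitor for unusual patterns" step becomes a query over the log rather than guesswork after the fact.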
Network Isolation:
- Keep AI operations on isolated VLANs
- Prevent direct internet access from AI tools
- Use proxy servers for external queries
Avoiding Hallucination
The AI will most likely hallucinate if it can't answer a question, so it is necessary to give it the capabilities it needs to complete a task. For example, if you want it to query CVE databases, give it search tools. Give the AI every advantage you can to reduce hallucinations and false positives. A successful assistant has access to as much data as possible, along with the capability to search and extract information.
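In practice, "give it search tools" means declaring them in a tool-calling schema and routing the model's calls to real implementations. A minimal sketch, in the JSON-schema style common to LLM tool-calling APIs (exact field names vary by provider, and `search_cves` is a hypothetical tool of mine):

```python
# Declare a search tool so the model looks facts up instead of guessing.
# Field names follow common LLM tool-calling conventions; check your
# provider's documentation for the exact schema it expects.
cve_search_tool = {
    "name": "search_cves",
    "description": "Search a local CVE mirror for a product and version. "
                   "Use this instead of answering from memory.",
    "parameters": {
        "type": "object",
        "properties": {
            "product": {"type": "string"},
            "version": {"type": "string"},
        },
        "required": ["product", "version"],
    },
}


def dispatch(tool_call: dict, registry: dict) -> str:
    """Route a model-issued tool call to its real implementation."""
    fn = registry[tool_call["name"]]
    return fn(**tool_call["arguments"])
```

The description doubles as an anti-hallucination instruction: the model is told explicitly that memory is not an acceptable source for CVE data.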
Conclusion
I think my main issue here is that people are using chat interfaces as an interface for their tooling, which is puzzling: why are we simply substituting one interface for another? That doesn't make any sense to me. Technically, we just run commands. If you have a pipeline, you already have a state machine. Adding a chat layer on top of that isn't intelligent integration; it's redundancy.
So next time you’re about to type a command into ChatGPT, ask yourself: why am I using this as an interface instead of having it work alongside me? Your workflow is waiting for an upgrade.
[end of ramblings]