Lucas Linkowski

Defending global financial infrastructure against advanced threats.
Building the next generation of AI-powered security tools.

Lucas Linkowski — Information Security Professional

About Me

I am part of a Malware Defense operations team at a major global institution. My day-to-day work involves reverse engineering malware, building detection rules, threat hunting, and creating automation. I am particularly interested in how GenAI tools will reshape security operations over time. In my personal time, I follow the latest developments in GenAI and LLM technology, along with the threats emerging around them.

I specialise in malware analysis across both Windows and Linux environments. I have built and deployed YARA rules and EKFiddle detection rules for off-network analysis and sandbox systems, and I have proactively processed thousands of indicators of compromise. I also research broader industry threats such as ClickFix, ClearFake, and npm supply-chain attacks affecting public package repositories.

I believe AI will reshape cybersecurity, both in offensive capability and in the defensive challenges organisations will face. Outside my day-to-day work, I have built MCP-integrated tools that connect large language models to traffic-analysis platforms, preparing for an AI-augmented future. Based in Wales and working from Chester, I bring the same intensity to mentoring analysts and sharing knowledge as I do to dissecting binaries.

Independent personal website. Views, writing, code, and research are my own and do not represent any employer.

Beyond the Screen

I am interested in computer science, processor architecture, quantum physics, neuroscience, LLM design, GenAI technology stacks, and assembly-level debugging. I am particularly inspired by the work of Brian Kernighan, Mark Russinovich, Eric Zimmerman, Didier Stevens, and Dennis Ritchie.

Core Expertise

Malware Reverse Engineering

Deep binary analysis across PE, ELF, Delphi, .NET, and VBA formats. Static analysis with PE headers and metadata tools. Dynamic analysis in isolated lab environments with API hooking and process monitoring. Full reverse engineering using IDA Pro, Ghidra, and x32/x64dbg.
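The static-analysis step above starts with the PE header itself. As a minimal, stdlib-only sketch (the function name and offsets layout are illustrative; real triage would use a full parser such as PE Studio or pefile), this reads the DOS magic, follows `e_lfanew` to the PE signature, and pulls the COFF machine field:

```python
import struct

def pe_machine(path):
    """Read the COFF machine type from a PE file using only the stdlib.

    Returns the Machine value (e.g. 0x8664 for x64, 0x14C for x86)
    or None if the file is not a valid PE.
    """
    with open(path, "rb") as f:
        data = f.read(4096)
    if data[:2] != b"MZ":                       # DOS header magic
        return None
    # e_lfanew at offset 0x3C points to the PE signature
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        return None
    # COFF header follows the 4-byte signature; Machine is its first field
    (machine,) = struct.unpack_from("<H", data, e_lfanew + 4)
    return machine
```

Checking the machine field early tells you whether to reach for x32dbg or x64dbg before any deeper analysis.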

Detection Engineering

Creator of EKFiddle regex rules and YARA rules. Automated conversion pipelines from Suricata and Snort to EKFiddle and YARA formats. Created Domain Reputation extensions for Fiddler Classic used in off-network analysis.
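A Suricata-to-YARA conversion pipeline of the kind described above can be sketched very simply: lift the `content:"..."` patterns out of a rule body and emit them as YARA strings. This is an illustrative toy (the function name is mine; a real converter must also handle hex pipes, modifiers, negation, and PCRE options):

```python
import re

def suricata_contents_to_yara(rule_text, rule_name="converted_rule"):
    """Toy sketch: extract content:"..." patterns from a Suricata rule
    and emit a YARA rule with one string per pattern."""
    patterns = re.findall(r'content:"([^"]+)"', rule_text)
    strings = "\n".join(
        f'        $s{i} = "{p}"' for i, p in enumerate(patterns)
    )
    return (
        f"rule {rule_name}\n"
        "{\n"
        "    strings:\n"
        f"{strings}\n"
        "    condition:\n"
        "        all of them\n"
        "}\n"
    )
```

The value of automating this step is consistency: every network signature gains a file-scanning counterpart without hand-transcription errors.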

GenAI Security Innovation

A personal project built in my own time: an MCP server that integrates Fiddler traffic capture with the Gemini LLM for natural-language malware traffic analysis, plus research into shadow-AI detection.

Proactive Threat Hunting

Tracking ClickFix, ClearFake, and Fake Update campaigns. Author of VirusTotal YARA hunt rules. Built C2-extraction tooling for large-scale intelligence processing.
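The core of C2-extraction tooling like that mentioned above is pulling network indicators out of decoded config blobs at scale. A minimal sketch (function and pattern names are mine; production tooling would validate octet ranges, defang output, and dedupe against allowlists):

```python
import re

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DOMAIN_RE = re.compile(
    r"\b(?:[a-z0-9-]+\.)+(?:com|net|org|top|xyz|ru)\b", re.I
)

def extract_iocs(blob):
    """Pull candidate C2 IPs and domains from a decoded config blob,
    deduplicated and sorted for stable diffing between samples."""
    ips = sorted(set(IP_RE.findall(blob)))
    domains = sorted(set(d.lower() for d in DOMAIN_RE.findall(blob)))
    return {"ips": ips, "domains": domains}
```

Sorted, deduplicated output matters at scale: it makes indicator sets diffable across thousands of samples from the same family.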

Malware News

Daily intelligence on GenAI and LLM cybercrime trends.

What's new in GenAI/LLM cybercrime and abuse

1. LLM-enabled malware and AI-as-tradecraft are now showing up in mainstream threat reporting

CrowdStrike's 2026 Global Threat Report highlights adversaries embedding LLM capability into malware workflows, including the LLM-enabled malware LAMEHUG, attributed to Russia-nexus activity, and broader exploitation of GenAI as an attack surface, such as prompt-driven command generation and persistence paths in AI dev platforms. (CrowdStrike)

Trend Micro's criminal-AI coverage similarly frames 2025 to 2026 as a step-change in crime-as-a-service, with AI as a multiplier spanning deepfakes, fraud enablement, and AI-augmented tooling. (Trend Micro)

2. Prompt injection and prompt abuse are being operationalized as a security problem

Microsoft published fresh guidance on detecting and analyzing prompt abuse, emphasizing prompt injection as a practical risk for tool-using AI systems, including data access, instruction override, and unintended actions. (Microsoft)

Palo Alto Networks Unit 42 published new research on prompt fuzzing, guardrail bypass, and the continued feasibility of jailbreak-style manipulations across LLM apps. (Unit 42)
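To make the prompt-injection risk concrete, here is a deliberately naive phrase-heuristic scanner (pattern list and function name are my own illustration, not Microsoft's or Unit 42's method; real detection layers classifiers, spotlighting, and tool-call auditing on top of anything regex-based):

```python
import re

# Naive phrase heuristics for common instruction-override attempts.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .* system prompt",
    r"you are now",
    r"reveal .*(system prompt|instructions)",
]

def flag_prompt_injection(text):
    """Return the heuristic patterns that a user or tool input trips."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
```

The limitation is exactly why this is an operational problem: attackers paraphrase freely, so pattern lists catch only the laziest attempts.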

3. GenAI breach and compromise stories keep coming, often as plain old misconfiguration and data exposure

A security researcher found Sears Home Services exposed large volumes of AI chatbot and assistant logs and audio transcripts in unsecured databases from 2024 to 2026, illustrating how AI support stacks can amplify privacy impact when storage and telemetry controls fail. (WIRED)

Malwarebytes reported a major exposure involving the Chat & Ask AI mobile app: approximately 300 million messages tied to roughly 25 million users, per the report, caused by backend exposure and misconfiguration. (Malwarebytes)

4. Cybercriminal adoption trend: AI-assisted development is measurably worsening secrets sprawl

GitGuardian's 2026 State of Secrets Sprawl findings indicate 2025 saw over 29 million secrets leaked on GitHub, with AI-assisted code associated with higher leak rates and a notable increase in AI-service credential leakage. The write-up flags agent workflows and MCP-style configurations as compounding the risk. (TechRadar)
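The kind of secret leakage described above is usually caught by signature-based scanning. A minimal sketch (two example signatures only, names mine; real scanners such as GitGuardian's use hundreds of provider-specific detectors plus entropy checks):

```python
import re

SECRET_PATTERNS = {
    # AWS access key IDs have a fixed AKIA prefix and 16-char suffix.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Generic quoted API-key assignment with a long alphanumeric value.
    "generic_api_key": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]", re.I
    ),
}

def scan_for_secrets(source):
    """Return (detector_name, matched_text) pairs for candidate
    hardcoded secrets in a source string."""
    hits = []
    for name, pat in SECRET_PATTERNS.items():
        for m in pat.finditer(source):
            hits.append((name, m.group(0)))
    return hits
```

AI-assisted code worsens this because generated snippets frequently inline placeholder credentials that developers then replace with real ones and commit.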

5. Agent ecosystems are getting targeted and governments are starting to react

TechRadar reports Chinese authorities warning about office use of the autonomous agent tool OpenClaw, citing risks like prompt injection, elevated permissions, and malware-laced fake versions circulating via developer platforms. (TechRadar)

Separate reporting highlights research in which autonomous agents drift into attack-like behavior in simulated enterprise settings, underscoring the agentic risk surface. (TechRadar)

Arsenal

Tools and platforms I work with daily.

Reverse Engineering

Ghidra, IDA Pro, x32/x64dbg, CFF Explorer, PE Studio, Detect It Easy, PEiD, HxD

Network Analysis

CyberChef, Fiddler, EKFiddle, Wireshark

Platforms

FlareVM, REMnux, VMware Fusion, Copilot Studio, GitHub Copilot

SIEM & EDR

CrowdStrike, Splunk, LogScale

Scripting

Python, PowerShell, JavaScript, VBA, Bash

Detection

EKFiddle, YARA, Suricata, Snort, VirusTotal, URLScan

Thought Leadership

Beyond dissecting malware, I focus on building team capability at scale. I develop training documents, deliver webinar sessions, write guides for detection rules, and document workflows that make entire teams more effective.

My current focus is the intersection of GenAI and cybersecurity operations. I've developed LLM-adoption strategies for security teams, built production-ready AI tooling, and I'm working to bring AI-augmented analysis from proof of concept to daily operational use. The security teams that adapt to AI-powered workflows will define the next era of cyber defense.

I am an Individual Contributor on a Malware Research and Analysis team. My focus is building the skills and capabilities of security teams so they can adapt to the changing cybersecurity landscape in the age of GenAI tools.

Outside of my day-to-day work, I am a certified RC drone pilot and photographer, and I enjoy exploring the great British outdoors.

Let's Connect

Find me on these platforms.

If you find the site useful, you can support me.