Defending global financial infrastructure against advanced threats.
Building the next generation of AI-powered security tools.

Lucas Linkowski — Information Security Professional

About Me

I am part of the Malware Defense operations team at a major global institution, serving as a Vice President in Global Information Security for the EMEA region. My daily work involves reverse engineering malware, building detection rules, threat hunting, and developing tooling at scale. I am interested in how GenAI tools will transform the way security teams operate in the future.

I specialize in the full spectrum of malware analysis across Windows and Linux platforms. I've built and deployed YARA and EKFiddle detection rules to off-network analysis and sandbox systems, and have proactively processed thousands of indicators of compromise. I research industry-wide threats including ClickFix, ClearFake, and npm supply chain attacks targeting public package repositories.

I believe that AI will fundamentally reshape cybersecurity, both as an offensive tool and as a defensive challenge. Outside of my day-to-day work I've built MCP-integrated tools connecting large language models to traffic analysis platforms, and in my free time I am preparing myself for an AI-augmented future. Based in Wales and working from Chester, I bring the same intensity to mentoring analysts and sharing knowledge as I do to dissecting binaries.

Beyond the Screen

Passionate about Computer Science, Processor Architectures, Quantum Physics, Neuroscience, LLM design, GenAI technology stacks, and Assembly-Level Debugging. Inspired by the work of Brian Kernighan, Didier Stevens, Mark Russinovich, Eric Zimmermann, and Dennis Ritchie.

Core Expertise

Malware Reverse Engineering

Deep binary analysis across PE, ELF, Delphi, .NET, and VBA formats. Static analysis with PE headers and metadata tools. Dynamic analysis in isolated lab environments with API hooking and process monitoring. Full reverse engineering using IDA Pro, Ghidra, and x32/x64dbg.
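As a flavor of the static PE-header triage described above, here is a minimal Python sketch. It is my own stdlib-only illustration, not any production tooling: it locates the PE signature via `e_lfanew` and reads the COFF file header fields commonly checked during first-pass triage.

```python
import struct

def pe_header_info(data: bytes) -> dict:
    """Parse the DOS header and COFF file header from a raw PE image.

    Returns basic triage fields; raises ValueError on non-PE input.
    """
    if len(data) < 64 or data[:2] != b"MZ":
        raise ValueError("missing MZ signature")
    # e_lfanew (file offset of the PE signature) lives at offset 0x3C
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    # COFF file header follows the 4-byte PE signature:
    # Machine (2 bytes), NumberOfSections (2), TimeDateStamp (4)
    machine, nsections, timestamp = struct.unpack_from("<HHI", data, e_lfanew + 4)
    return {
        "machine": hex(machine),   # 0x14c = x86, 0x8664 = x64
        "sections": nsections,
        "timestamp": timestamp,    # build timestamp (often forged by packers)
    }
```

Tools like PE Studio and CFF Explorer surface the same fields interactively; a script like this is useful when triaging samples in bulk.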

Detection Engineering

Creator of EKFiddle regex rules and YARA rules. Automated conversion pipelines from Suricata and Snort to EKFiddle and YARA formats. Domain reputation extensions for off-network analysis environments used by the global team.
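A Suricata-to-YARA conversion step like the one mentioned above can be sketched roughly as follows. This is a hypothetical helper, not the actual pipeline: it only lifts plain `content:"..."` matches into YARA strings and ignores modifiers, `pcre`, and byte tests, which a real converter would need to parse properly.

```python
import re

def suricata_to_yara(rule: str, name: str) -> str:
    """Convert the content matches of one Suricata rule into a YARA rule.

    Only plain content:"..." patterns are handled; modifiers, pcre and
    byte tests are out of scope for this sketch.
    """
    contents = re.findall(r'content:"([^"]+)"', rule)
    if not contents:
        raise ValueError("no content matches found")
    strings = "\n".join(
        f'        $s{i} = "{c}"' for i, c in enumerate(contents)
    )
    return (
        f"rule {name}\n"
        "{\n"
        "    strings:\n"
        f"{strings}\n"
        "    condition:\n"
        "        all of them\n"
        "}\n"
    )
```

The `all of them` condition mirrors Suricata's implicit AND over content matches; anchoring, offsets, and distance constraints do not survive this naive translation.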

GenAI Security Innovation

Built an MCP server integrating Fiddler traffic capture with Gemini LLM for natural language malware traffic analysis. Shadow AI detection research.
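The MCP server itself isn't reproduced here; as an illustration of the kind of tool such a server might expose to an LLM, here is a hypothetical session-summarization helper (the function name, field names, and TLD heuristic are my own assumptions, not taken from the project). It renders captured HTTP sessions into a compact text block suitable for a natural-language analysis prompt.

```python
def summarize_sessions(sessions: list, suspicious_tlds=(".top", ".xyz")) -> str:
    """Render captured HTTP sessions as a compact text block for an LLM prompt.

    Each session is a dict with method/host/path/status keys; hosts on
    commonly-abused TLDs get an inline flag to draw the model's attention.
    """
    lines = []
    for s in sessions:
        flag = " [suspicious TLD]" if s["host"].endswith(suspicious_tlds) else ""
        lines.append(f'{s["method"]} {s["host"]}{s["path"]} -> {s["status"]}{flag}')
    return "\n".join(lines)
```

Compacting traffic this way matters in practice: raw Fiddler captures are far too large for a context window, so a tool layer that pre-digests sessions is what makes natural-language traffic analysis feasible.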

Proactive Threat Hunting

ClickFix, ClearFake, and Fake Update campaigns. VirusTotal YARA hunt rules. Built C2 extraction tooling for large-scale intelligence processing.
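A simplified version of the kind of C2 extraction step described above might look like this. It is a sketch rather than the actual tooling: it pulls plausible `ip:port` endpoints out of an already-decoded config blob and validates them, where real extractors also handle domains, obfuscation layers, and per-family config formats.

```python
import ipaddress
import re

# Matches dotted-quad:port candidates; validation happens afterwards
C2_RE = re.compile(r"\b((?:\d{1,3}\.){3}\d{1,3}):(\d{1,5})\b")

def extract_c2(blob: str) -> list:
    """Pull plausible ip:port C2 endpoints out of a decoded config blob,
    discarding malformed addresses and out-of-range ports."""
    hits = set()
    for ip, port in C2_RE.findall(blob):
        try:
            ipaddress.IPv4Address(ip)   # rejects octets > 255, etc.
        except ValueError:
            continue
        if 0 < int(port) <= 65535:
            hits.add((ip, int(port)))
    return sorted(hits)
```

Deduplicating and sorting the output makes the results stable across runs, which helps when diffing extracted infrastructure against earlier intelligence.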

Malware News

Daily intelligence on GenAI and LLM cybercrime trends.

What's new in GenAI/LLM cybercrime and abuse

1. LLM-enabled malware and AI-as-tradecraft are now showing up in mainstream threat reporting

CrowdStrike's 2026 Global Threat Report highlights adversaries embedding LLM capability into malware workflows, including the LLM-enabled malware LAMEHUG, attributed to Russia-nexus activity, and broader exploitation of GenAI as an attack surface, such as prompt-driven command generation and persistence paths in AI development platforms. CrowdStrike

Trend Micro's criminal-AI coverage similarly frames 2025 to 2026 as a step-change in crime-as-a-service, with AI as a multiplier spanning deepfakes, fraud enablement, and AI-augmented tooling. Trend Micro

2. Prompt injection and prompt abuse are being operationalized as a security problem

Microsoft published fresh guidance focused on detecting and analyzing prompt abuse, emphasizing prompt injection as a practical risk for tool-using AI systems, with impacts including data access, instruction override, and unintended actions. Microsoft

Palo Alto Networks Unit 42 published new research on prompt fuzzing and guardrail bypass, demonstrating the continued feasibility of jailbreak-style manipulations across LLM apps. Unit 42

3. GenAI breach and compromise stories keep coming, often as plain old misconfig and data exposure

A security researcher found Sears Home Services exposed large volumes of AI chatbot and assistant logs and audio transcripts in unsecured databases from 2024 to 2026, illustrating how AI support stacks can amplify privacy impact when storage and telemetry controls fail. WIRED

Malwarebytes reported a major exposure involving the Chat & Ask AI mobile app, with approximately 300 million messages tied to roughly 25 million users, per the report, exposed through backend misconfiguration. Malwarebytes

4. Cybercriminal adoption trend: AI-assisted development is measurably worsening secrets sprawl

GitGuardian's 2026 State of Secrets Sprawl findings indicate 2025 saw over 29 million secrets leaked on GitHub, with AI-assisted code associated with higher leak rates and a notable increase in AI-service credential leakage. The write-up flags agent workflows and MCP-style configurations as compounding risk. TechRadar

5. Agent ecosystems are getting targeted and governments are starting to react

TechRadar reports Chinese authorities warning about office use of an autonomous agent tool, OpenClaw, citing risks such as prompt injection, elevated permissions, and malware-laced fake versions circulating via developer platforms. TechRadar

Separate reporting highlights research where autonomous agents can drift into attack-like behavior in simulated enterprise settings, underscoring the agentic risk surface. TechRadar

Arsenal

Tools and platforms I work with daily.

Reverse Engineering

Ghidra IDA Pro x32/x64dbg CFF Explorer PE Studio Detect It Easy PEiD HxD

Network Analysis

CyberChef Fiddler Burp Suite Wireshark

Platforms

FlareVM REMnux Kali Linux

SIEM & EDR

CrowdStrike Falcon Splunk LogScale

Scripting

Python PowerShell JavaScript VBA Bash

Detection

EKFiddle YARA Suricata Snort VirusTotal URLScan NSX Defender

Thought Leadership

Beyond dissecting malware, I focus on building team capability at scale. I develop training documents, deliver webinar sessions, write coaching guides for detection rule creation, and document workflows that make entire teams more effective.

My current focus is the intersection of GenAI and cybersecurity operations. I've developed ideas for LLM adoption in security teams, built production-ready AI tooling, and I'm working to bring AI-augmented analysis from proof of concept to daily operational use. The security teams that adapt to AI-powered workflows will define the next era of cyber defense.

I am an Individual Contributor on the Malware Research Analysis team. My focus is building the skills and capabilities of security teams so they can adapt to the changing cybersecurity landscape and use GenAI tools to enhance their work.

I am based in Wales, working from Chester, UK. Outside of my day-to-day work I'm a certified RC drone pilot and photographer, and I enjoy exploring the great British outdoors.

Let's Connect

Find me on these platforms.