
Ransomware Hit. What Now?

A step-by-step ransomware response guide: what to do in the first 15 minutes, who to call, how to recover, and what NOT to do.

Tags: Ransomware · Incident Response · Disaster Recovery

Files are encrypted. There's a ransom note on the domain controller. People are panicking. Phones are ringing. Nobody knows what to do first.

This is the guide for that moment. Step by step — what to do, what NOT to do, who to call, and how to get through it. Everything here is based on real-world incident response, not theory.


The First 15 Minutes

Everything you do in the next 15 minutes determines how the next 24 days go. The average ransomware recovery takes 24 days. Your actions right now decide whether you're on the short end or the long end of that average.

  1. ISOLATE — disconnect affected systems (0-2 minutes). Pull the network cable. Disable the switch port. Disconnect from Wi-Fi. Do NOT power off the machines. Isolation stops the ransomware from spreading to more systems. Powering off destroys forensic evidence stored in memory — encryption keys, process lists, active network connections. Your IR team needs that evidence. Isolate the network, leave the systems running.
  2. CALL — Incident Commander + Security Lead (2-5 minutes). Both. Simultaneously. At 2 AM if that's when this is happening. Use the pre-defined call tree from your runbook. If you don't have one, call whoever has the authority to make decisions and write checks. This is now a cross-functional incident — not an IT problem.
  3. SNAPSHOT — preserve evidence before touching anything (5-10 minutes). Snapshot every affected VM; a scripted sketch follows this list. Export system logs. Screenshot the ransom note — the exact wording, the payment address, the deadline, the threat actor's communication channel. Your IR team, your cyber insurance provider, and law enforcement all need this evidence. If you remediate before preserving, you've destroyed your ability to investigate and your insurance claim gets significantly harder.
  4. SCOPE — how big is this? (10-15 minutes). Which systems are encrypted? Which are still clean? Is the encryption actively spreading? And the most critical question: check your backup infrastructure IMMEDIATELY. Is the Veeam server compromised? Is the hardened Linux repo intact? Can you still access S3 Object Lock buckets? If the attacker got your backups, the entire recovery calculus changes.

What NOT to Do

Under pressure, people do the wrong thing fast. Here are the six mistakes that make a ransomware incident worse:

  🚫 DON'T power off encrypted systems. This destroys forensic evidence in memory — encryption keys, process lists, network connections. Isolate the network connection instead. Leave the system running.
  🚫 DON'T immediately start restoring from backups. You don't know if the backups are clean. The attacker may have been inside for weeks, corrupting data that was then faithfully backed up. Restoring a compromised backup reinfects your environment.
  🚫 DON'T trust your replication. vSphere Replication, AWS cross-region — they did exactly what they were designed to do: faithfully copy your data to the DR site. Including the encrypted data. Your replicated copy IS the ransomware. Immutable backups are your recovery path.
  🚫 DON'T contact the attacker without counsel. Do not respond to the ransom note, negotiate, or pay without legal counsel and your insurance provider involved. There are legal implications to communication with threat actors, and your policy likely has specific requirements about negotiation.
  🚫 DON'T announce it publicly before you're ready. External communications require coordination with legal, PR, and your insurance provider. A premature announcement creates panic, regulatory exposure, and may complicate ongoing investigation or negotiation.
  🚫 DON'T blame anyone. Blame kills cooperation. The employee who clicked the phishing email is not the problem — the lack of MFA, the lack of email filtering, the lack of network segmentation is. Focus all energy on response. Post-incident review handles root cause later.

Hour 1: Contain & Assess

The initial 15 minutes are past. Now you contain and assess systematically:

  1. Confirm isolation is holding. No new systems should be getting encrypted. Monitor for C2 callbacks and lateral movement.
  2. Activate the full incident response team. IC, Security Lead, VMware Admin, Network Admin, Legal, Insurance contact, Comms lead. Open a dedicated bridge call.
  3. Assess the blast radius. How many systems? Which tiers? Which business functions are down?
  4. Check backup infrastructure status. Is the backup server compromised? Is the hardened repo intact? Can you access immutable copies? (A scripted check appears after this list.)
  5. Establish the timeline. When was the first indicator? What was the attack vector? How long has the attacker been inside? This determines your clean RPO.
  6. Notify your cyber insurance provider. Most policies require notification within 24-72 hours. They'll assign an IR firm, legal counsel, and potentially a negotiator.
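The backup check in step 4 can be partially scripted. Below is a minimal sketch, assuming your immutable copies sit in S3 with Object Lock and that AWS credentials are already configured on the machine running it; the bucket name is a placeholder, not a real convention:

```python
# Minimal sketch: confirm an S3 Object Lock backup bucket is still locked
# and still reachable. "backup-immutable-example" is a placeholder name.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "backup-immutable-example"  # hypothetical bucket name

try:
    cfg = s3.get_object_lock_configuration(Bucket=bucket)
    lock = cfg["ObjectLockConfiguration"]
    print("Object Lock:", lock.get("ObjectLockEnabled"))
    print("Default retention:", lock.get("Rule", {}).get("DefaultRetention"))
except ClientError as e:
    # Failure to even read the lock configuration is itself a finding.
    # Escalate immediately -- it may mean credentials or the bucket
    # have been tampered with.
    print("Could not verify Object Lock:", e.response["Error"]["Code"])
```

If this fails, or the retention window is shorter than the attacker's suspected dwell time, treat it as a red flag in the assessment.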

The most critical assessment: check your backup infrastructure. If the Veeam console shows a ransom note too, and you don't have a hardened Linux repo or S3 Object Lock — the room goes very quiet. That silence is the difference between "we recover from backups" and "we're discussing the ransom."

The Backup Question

This is the moment that determines everything. Three possible outcomes based on one architectural decision you made months ago:


Best Case

Immutable backups exist. Retention exceeds the attacker's dwell time. Clean restore point confirmed. Clean room recovery is possible.

Recovery: hours to days.
Data loss: minimal.
Ransom: not needed.

Middle Case

Backups exist but aren't immutable. Some may be compromised. Retention may be insufficient. Partial recovery possible.

Recovery: days to weeks.
Data loss: moderate.
Ransom: under discussion.

Worst Case

Backups destroyed or encrypted. No immutable copy. No offsite/air-gapped copy. No verified restore point.

Recovery: weeks to months.
Data loss: catastrophic.
Ransom: may be only option.

The difference between these three outcomes is one thing: immutable backups with sufficient retention. That single architectural decision — made weeks or months before this moment — determines which outcome you're living through right now.

Clean Room Recovery

Never restore directly to production. The attacker may have planted persistence mechanisms — backdoors, scheduled tasks, compromised service accounts — that survive a simple restore. You need a clean room.

  1. Build an isolated recovery environment. Separate VLAN, no connectivity to production. Fresh vCenter. Fresh AD forest. No domain trust.
  2. Verify backup integrity before restore. Check immutability timestamps. Verify no encryption reached the backup chain. Validate checksums; a checksum sketch appears after this list.
  3. Restore to the clean room first. Power on. Let the IR team scan for IOCs, persistence mechanisms, and backdoors.
  4. Validate application functionality in isolation. Smoke tests. Data integrity checks. Confirm no ransomware artifacts.
  5. Harden before reconnecting. Reset ALL credentials — every password, every service account, every API key. Patch the exploited vulnerability. Enable MFA. Segment the network. THEN connect to production.
  6. Monitor aggressively for 30+ days. EDR on every endpoint. 24/7 SOC monitoring. 69% of organizations that paid ransom were attacked again. The attacker may try to come back.
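Step 2's checksum validation is scriptable if you captured known-good hashes before the incident. A minimal sketch, assuming a pre-incident manifest of SHA-256 digests; the manifest format and paths here are hypothetical, so adapt them to what your backup tool actually produces:

```python
# Minimal sketch: compare restored files against a pre-incident manifest
# of SHA-256 digests. The manifest format ("<digest>  <relative path>"
# per line) and the restore path are assumptions for illustration.
import hashlib
from pathlib import Path

def sha256(path: Path, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large restores don't exhaust RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

restore_root = Path("/cleanroom/restore")  # hypothetical clean-room mount
for line in Path("manifest.txt").read_text().splitlines():
    expected, rel = line.split(maxsplit=1)
    actual = sha256(restore_root / rel)
    print(f"{rel}: {'OK' if actual == expected else 'MISMATCH -- investigate'}")
```

A mismatch doesn't automatically mean ransomware, but every mismatch needs an explanation before that file leaves the clean room.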

The Ransom Decision

This is a business decision, not a technical one. The IC decides with legal and insurance input. Here are the factors:


Arguments against paying: you have verified backups and can recover; 69% of payers were hit again; payment funds criminal operations; no guarantee the attacker deletes exfiltrated data; no guarantee decryption keys work; average payment is $1M and you still spend $1.53M on recovery anyway.

Factors that complicate: no usable backups exist; the business will cease without the data; exfiltrated data creates existential legal risk; lives are at risk (healthcare, critical infrastructure); insurance recommends payment; rebuild cost far exceeds ransom.

Which side of that ledger you're on was determined by the backup architecture decisions you made months ago. The organizations that can refuse to pay invested in immutability. The organizations forced to consider payment didn't.

The Recovery Timeline

Here's the realistic timeline from detection to full recovery:

  1. 0-15 min · Isolate: Disconnect. Preserve. Don't power off.
  2. 15 min-1 hr · Contain & Assess: Blast radius. Backup check. Insurance call.
  3. 1-4 hrs · Investigate: Dwell time. Attack vector. Clean RPO identified.
  4. 4-12 hrs · Recovery Decision: Pay vs. restore. Legal, insurance, IC alignment.
  5. 12-48 hrs · Clean Room Restore: Isolated environment. Verify. Validate. Harden.
  6. 48 hrs-7 days · Production Recovery: Reconnect validated systems. DNS cutover. Monitor.
  7. 7-30 days · Stabilize & Harden: Enhanced monitoring. Credential reset. Patch. MFA.
  8. 30-90 days · Lessons Learned: Post-incident review. Architecture changes. Testing.

This is the realistic timeline. Not the vendor marketing timeline. Twenty-four days average for a reason.

How to Never Be Here Again

Six things that prevent you from ever being in this situation again:

  1. Implement 3-2-1-1-0 backups. Three copies, two media, one offsite, one immutable, zero errors. The immutable copy is the one that saves you. Hardened Linux repo, S3 Object Lock, or tape offsite.
  2. Segment backup credentials. If your backup admin accounts are in Active Directory, an attacker with domain admin can delete every backup. Use local accounts or a separate auth system.
  3. Enable MFA on everything. VPN, vCenter, email, backup console, cloud accounts. MFA blocks 82% of credential-based attacks for under $10K/year.
  4. Run ransomware tabletop exercises. Quarterly. With the full team. The team that's practiced doesn't panic.
  5. Test your immutable backups monthly. Try to delete a test backup from the immutable repo. It should fail. If it succeeds, fix it immediately. A scripted version of this check appears after this list.
  6. Assume breach. Plan accordingly. Zero trust. Network segmentation. Least privilege. The question isn't whether you'll be attacked — it's whether you'll survive it.
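Item 5 is worth scripting so it actually happens every month. A minimal sketch against an S3 Object Lock repo: it tries to delete a specific object version, which compliance-mode Object Lock must refuse with AccessDenied. The bucket and key names are placeholders, and the canary object is assumed to already exist:

```python
# Minimal sketch of the monthly "try to delete it" test. A delete of a
# specific object VERSION under compliance-mode Object Lock must fail
# with AccessDenied; if it succeeds, the repo is not actually immutable.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "backup-immutable-example", "canary/test-backup.vbk"  # placeholders

# Grab a real version ID -- a versionless delete only adds a delete
# marker and proves nothing about immutability.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
version_id = versions["Versions"][0]["VersionId"]

try:
    s3.delete_object(Bucket=bucket, Key=key, VersionId=version_id)
    print("ALERT: delete succeeded -- this repo is NOT immutable. Fix it now.")
except ClientError as e:
    if e.response["Error"]["Code"] == "AccessDenied":
        print("PASS: Object Lock blocked the delete.")
    else:
        raise
```

Wire the ALERT branch into your monitoring so a silent failure of immutability pages someone.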

The best time to prepare was six months ago. The second best time is right now. Every item on this list can be started today. Immutable backups can be configured this week. MFA can be enabled this afternoon. A tabletop can be scheduled for next month. Don't wait for the worst day.

Watch the Video

Ransomware Hit. What Now? — the full incident response guide in 5 minutes.

Don't Wait for the Worst Day

Shift7 Consulting offers DR and cyber resilience assessments. We'll tell you exactly where your gaps are — including your immutability posture — before an attacker finds them for you.

Request a Cyber Resilience Assessment
SHIFT7 CONSULTING

Nate Sellers is a Principal Consultant at Shift7 Consulting LLC, specializing in enterprise infrastructure strategy, cloud architecture, and cyber resilience. 20+ years in enterprise infrastructure and disaster recovery.

contact@shift7az.com · (480) 243-5793 · shift7az.com