
🚨 The Drone Already Decided Who Dies. And No Human Pushed the Button. 5 Steps Before Autonomous Weapons Become Normal

Jan 29, 2026

🚨 A Necessary Note Before We Begin

This newsletter is written for education, awareness, and responsible citizenship, not to promote fear or undermine legitimate security operations.

I have tremendous respect for the military, law enforcement agencies, intelligence professionals, and government personnel who stand between us and chaos every single day. Because of them, we can voice opinions freely, disagree openly about policies, and live with the level of security we often take for granted.

This series exists for one purpose only: 🛡️ To educate citizens on emerging military and law enforcement technologies so they can engage in informed civic debate about their use, oversight, and ethical boundaries.

Understanding autonomous weapons isn't about undermining security; it's about ensuring the technology that protects us doesn't outpace the safeguards that keep it accountable.

Always obey the law. Always respect legitimate authority. Always prioritize safety.

Now, let's examine what happens when machines start making decisions humans used to make.


REALITY BREACH

Libya, 2020.

A soldier runs. No cover. Just open desert and the sound of his own breathing.

He doesn't hear the drone. Most people don't.

It's already decided he's a target. The algorithm analyzed his thermal signature. Matched his movement pattern. Calculated probability of threat. Locked coordinates.

No pilot reviewed the decision. No commander gave clearance. No human finger touched a trigger.

The drone just... acted.

By the time the UN documented it, the weapon had already moved on. Hunting. Autonomously. In "fire, forget, and find" mode.

This wasn't a movie. This was the STM Kargu-2. A loitering munition with onboard AI. Real combat zone. Real casualties.

And here's what should terrify you: it wasn't the first time. And it definitely won't be the last.


A Short Story: Because This Is How It Feels

(Fictionalized composite based on documented military AI deployment patterns and emerging law enforcement technology trends. No graphic violence.)

A defense analyst sits in a briefing room, watching footage from a conflict zone overseas.

The screen shows thermal signatures moving across terrain. Dots. Shapes. Data.

Someone asks: "Who approved the strike?"

Silence.

Not because the answer is classified. Because there isn't one.

The system identified. The system tracked. The system engaged.

No malfunction. No override. Just... execution.

Later, a different analyst reviews the incident. Tries to trace the decision pathway. Pulls logs. Checks algorithms.

The machine did exactly what it was designed to do: eliminate threats without waiting for human confirmation.

That's when the weight hits.

This isn't about one drone in Libya. It's about what comes next.

Five years later, a police chief in a mid-sized American city sits in a similar briefing room.

A vendor pitches "next-generation autonomous patrol systems."

"Reduces response time by 90%," they say. "Eliminates officer bias in threat assessment."

The chief thinks about budget cuts. Officer shortages. The shooting last month that went viral.

The word "autonomous" doesn't sound dangerous anymore.

It sounds efficient.

And suddenly, the question isn't if these systems come home.

It's when.


⚠️ REALITY CHECK

The "Sci-Fi" Defense

Fiction: "Killer robots are decades away. This is movie stuff, not real life."

Reality: In 2020, AI-powered drones hunted down and engaged human targets in Libya without a human operator making the engagement decision. The UN documented it. Multiple defense contractors sell similar systems. They're already deployed.

The gap: The future arrived while you were still debating whether it was possible.


The "Humans Are Always in Control" Myth

Fiction: "There's always a person making the final call. Machines can't just decide to kill."

Reality: Loitering munitions like the STM Kargu-2 operate in "fire, forget, and find" mode, meaning they select, track, and strike targets using onboard AI, offline, with no data link to a human operator during engagement.

The gap: The human decision happened exactly once, when an operator set the mission parameters, selected autonomous mode, and pressed launch. After that? The machine decides.


The "Only the Military Uses These" Delusion

Fiction: "Autonomous weapons are for battlefields overseas. They'll never be used domestically."

Reality: Military technology always migrates to civilian use. Drones. Facial recognition. Tactical gear. Even tasers started as specialized tools and became standard equipment. Armed drones are already legal for law enforcement in some U.S. states. Autonomous targeting exists. The gap between those two facts? It's shrinking daily.

The gap: What's deployed in Libya today could be patrolling American streets tomorrow. Once the technology exists, limiting its use becomes nearly impossible.


THE BREAKDOWN: What Lethal Autonomous Weapons Actually Are

Lethal Autonomous Weapons Systems (LAWS), also called "killer drones," "loitering munitions," or "slaughterbots," are weapons that can search for, identify, and engage targets without requiring human intervention for each strike decision.

Here's how they work:

STEP 1: PROGRAMMED PARAMETERS

Before launch, operators set mission parameters:

  • Geographic boundaries
  • Target profiles (thermal signatures, movement patterns, vehicle types)
  • Rules of engagement (what qualifies as a threat)


This is the last moment a human is directly involved.

STEP 2: AUTONOMOUS SEARCH

The weapon deploys and begins scanning the operational area using:

  • Onboard cameras
  • Thermal imaging
  • Object-recognition algorithms
  • Machine learning models trained to identify targets


It's not waiting for orders. It's hunting.

STEP 3: TARGET IDENTIFICATION

When the system detects something matching its programmed criteria:

  • AI processes visual/thermal data
  • Compares to known threat profiles
  • Calculates probability scores
  • Makes a classification: threat or non-threat


No human reviews this decision in real time.

STEP 4: ENGAGEMENT DECISION

If the target meets the threshold:

  • The system locks coordinates
  • Calculates intercept path
  • Arms the warhead
  • Initiates strike


All of this happens in seconds. Offline. Autonomously.

STEP 5: STRIKE AND CONTINUATION

After engagement:

  • The system doesn't stop
  • It continues searching for additional targets
  • Repeats the cycle until mission parameters are met or fuel/ammunition is exhausted


This is what "loitering" means. It waits. It watches. It strikes. Then it moves on.
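
To make those five steps concrete, here is a deliberately simplified Python sketch of the decision loop just described. It is purely illustrative: this is not code from the Kargu-2 or any real weapon system, and every name in it (detect_objects, THREAT_THRESHOLD, mission_loop) is hypothetical. The point is structural. Read it and notice what never appears: a call that asks a human.

```python
# Illustrative sketch only -- NOT code from any real weapon system.
# Every name here is hypothetical. The point is structural: once
# launched, nothing in this loop consults a human before engaging.
import random
from dataclasses import dataclass

THREAT_THRESHOLD = 0.85  # STEP 1: mission parameter, set before launch


@dataclass
class Detection:
    position: tuple          # estimated target coordinates
    threat_score: float      # STEP 3: model-assigned probability


def detect_objects() -> list:
    """STEP 2: stand-in for onboard sensors and object recognition."""
    return [Detection((random.uniform(0, 10), random.uniform(0, 10)),
                      random.random())
            for _ in range(random.randint(0, 3))]


def mission_loop(ammunition: int = 1) -> None:
    while ammunition > 0:                     # STEP 5: loiter and repeat
        for target in detect_objects():
            # STEPS 3-4: classification and engagement are one branch.
            if target.threat_score >= THREAT_THRESHOLD:
                print(f"ENGAGE at {target.position}")  # STEP 4: strike
                ammunition -= 1
                break
        # Absent by design: no request_human_approval() call exists
        # anywhere between detection and engagement.


if __name__ == "__main__":
    mission_loop()
```

Strip away the sensors and the warhead, and a lethal autonomous weapon reduces to an if-statement wired to a trigger. That is what the policy debate is actually about.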


WHY THIS TRAP WORKS SO WELL

The danger isn't a Terminator scenario. It's something quieter and more insidious: normalization.

Think of it like this:

When you first heard about drones conducting airstrikes, it felt dystopian. Now? It's just "modern warfare."

When police departments started using facial recognition, it sparked debate. Now? It's standard procedure in many cities.

When tasers were introduced, people worried about misuse. Now? They're on every officer's belt.

Autonomous weapons are following the same path. And here's why that should concern every citizen:

1. The Political Cost of Using Force Drops

When you don't risk pilots, you lower the barrier to military action. When you don't even need a human to approve each strike? Force becomes a software deployment. Click. Launch. Forget.

Now imagine that same technology scaled down:

  • Police drones patrolling neighborhoods autonomously
  • Security systems that decide who's a threat and respond with force
  • Private security companies deploying autonomous "guard units"


Decisions that used to require deliberation now happen at machine speed.

2. From Battlefield to Main Street, Faster Than You Think

Military technology doesn't stay military.

  • Night vision goggles? Now on police tactical teams and civilian hunters.
  • Drones? Started military, now used by real estate agents and wedding photographers.
  • Facial recognition? Developed for counterterrorism, now in shopping malls and schools.


The STM Kargu-2 costs less than $100,000. That's affordable for:

  • Law enforcement agencies
  • Private security firms
  • Corporations protecting infrastructure
  • Anyone with a government contract


Once the technology exists, proliferation isn't a question of if; it's a question of how fast.

3. "Concealed Carry" for Autonomous Systems

Imagine a future where autonomous weapons are as common as concealed carry permits:

  • A homeowner deploys a "perimeter defense drone" that autonomously identifies and neutralizes intruders
  • A private security company patrols a gated community with AI-powered surveillance drones authorized to use "non-lethal force"
  • Local police departments lease autonomous units for crowd control during protests


Sound far-fetched?

Armed drones are already legal in some U.S. jurisdictions for law enforcement use. Autonomous targeting is already deployed overseas. The gap between those two realities? It's closing.

4. Accountability Disappears Into Algorithms

When an autonomous system makes a mistake:

  • Who's responsible? The programmer? The department that deployed it? The manufacturer?
  • How do you audit an AI decision made in milliseconds with no human review?
  • What happens when the algorithm misidentifies a teenager with a phone as a threat with a weapon?


In military contexts abroad, these questions are theoretical.

In your neighborhood, they become lawsuits, protests, and lives destroyed.

5. The Normalization Cycle Repeats

Here's the pattern:

  1. "This technology is too dangerous for civilian use."
  2. "We'll only use it in extreme situations with strict oversight."
  3. "It's proven effective, so we're expanding the program."
  4. "It's standard procedure now. Why are you questioning it?"


Once autonomous weapons are normalized in warfare, the path to domestic deployment becomes inevitable.

The question isn't whether AI will decide who gets targeted.

The question is: Will we accept that decision when it happens on American streets?


DOCUMENTED CASES: When Autonomous Systems Crossed the Line

Case Study 1: Libya 2020 - The Kargu-2 Deployment

Source: UN Security Council Panel of Experts Report (2021)

During fighting in Libya, forces affiliated with the Government of National Accord deployed Turkish-made STM Kargu-2 drones and other loitering munitions that the UN panel described as "lethal autonomous weapons systems."

Key findings:

  • Retreating forces were "hunted down and remotely engaged"
  • Systems operated in "fire, forget, and find" mode
  • No continuous data link to human operators during engagement
  • Targets were selected using onboard object-recognition and image-processing algorithms


The UN report didn't call it a war crime. It called it a deployment. That distinction matters.

This wasn't an experiment. It was operational use of AI-driven weapons in active combat.

Case Study 2: Azerbaijan-Armenia Conflict - Drone Swarms in Action

Source: Arms Control Association; defense analysts (2020-2021)

Azerbaijan's use of loitering munitions and autonomous drone systems against Armenian forces in the 2020 Nagorno-Karabakh war showed how rapidly these weapons are spreading.

What happened:

  • Relatively low-cost drones overwhelmed traditional air defenses
  • Some systems demonstrated autonomous target tracking
  • High casualty rates among forces without counter-drone capabilities
  • Set a precedent: autonomous weapons aren't just for superpowers anymore


Defense analysts noted: The psychological impact was as significant as the tactical one. Soldiers knew they could be targeted by machines that never tired, never hesitated, and never needed confirmation.

Case Study 3: The Phalanx System - When "Defensive" Becomes Autonomous

Source: U.S. Navy documentation, defense research papers

The U.S. Navy's Phalanx Close-In Weapon System (CIWS) has operated in "autonomous mode" for decades, automatically engaging incoming missiles and aircraft without human approval.

Why this matters:

  • Designed for defensive scenarios where human reaction time isn't fast enough
  • Proves autonomous weapons have been operational for years
  • Demonstrates mission creep: what starts as "defense only" expands over time
  • Once the technology exists, limiting its use becomes nearly impossible


The Phalanx doesn't hunt. But it decides. And that decision-making capability? It's now being integrated into offensive systems.

Case Study 4: U.S. Law Enforcement and Armed Drones - Already Legal

Source: Associated Press, ACLU, state legislative records

In 2015, North Dakota became the first U.S. state to legalize armed drones for law enforcement, initially restricted to "less-lethal" munitions like rubber bullets and tear gas.

Key developments:

  • Multiple states have considered or passed legislation allowing police drone use
  • Federal regulations permit law enforcement drone operations with proper authorization
  • Private security and corrections industries are exploring autonomous surveillance systems
  • No federal law currently prohibits autonomous weapons in domestic law enforcement contexts


What defense analysts note: The legal and technological infrastructure for domestic autonomous weapons deployment already exists. What's missing isn't capability; it's the political will to deploy. And that changes fast during crises.

Why this matters for citizens:

The same "fire, forget, and find" capability used in Libya could theoretically be adapted for:

  • Border patrol operations
  • Riot control
  • High-risk warrant service
  • Perimeter security at critical infrastructure


The technology doesn't know the difference between a combat zone and a city street. Only the programming changes.


THE PATCH

5 Steps Before Autonomous Weapons Become Normal

STEP 1: Understand What "Autonomous" Actually Means

Not all drones are autonomous. Not all AI is lethal.

Learn the difference:

  • Remote-controlled drones: Human pilot makes every decision
  • Semi-autonomous systems: AI assists, human approves strikes
  • Fully autonomous weapons: AI selects and engages targets without real-time human control


Why this matters: The danger isn't drones. It's the removal of human judgment from the kill decision.

Educate yourself so you can identify which systems your government is developing, purchasing, or deploying.
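
If those three categories feel abstract, a toy sketch makes the distinction concrete. This is a hypothetical illustration, not any vendor's API; all of the names are invented. Notice that the only thing that changes between modes is whether a human approval step exists at all:

```python
# Hypothetical sketch of the three autonomy levels -- not any vendor's
# API. The only thing that changes between modes is who approves.
from enum import Enum, auto


class Autonomy(Enum):
    REMOTE_CONTROLLED = auto()   # human pilot makes every decision
    SEMI_AUTONOMOUS = auto()     # AI assists, human approves strikes
    FULLY_AUTONOMOUS = auto()    # AI selects and engages on its own


def handle_detection(mode: Autonomy, threat_score: float,
                     threshold: float = 0.85) -> str:
    if mode is Autonomy.REMOTE_CONTROLLED:
        return "stream video to pilot; the human decides everything"
    if mode is Autonomy.SEMI_AUTONOMOUS:
        if threat_score >= threshold:
            return "cue target and WAIT for human approval to engage"
        return "discard"
    # FULLY_AUTONOMOUS: the approval step simply does not exist.
    return "engage" if threat_score >= threshold else "discard"


for mode in Autonomy:
    print(f"{mode.name}: {handle_detection(mode, threat_score=0.92)}")
```

When a vendor says "autonomous," ask which branch of that function they mean.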

STEP 2: Demand Transparency From Elected Officials

Your representatives vote on defense budgets that fund autonomous weapons research.

Ask them directly:

  • What autonomous weapons systems is our military developing?
  • What oversight exists for AI-driven targeting decisions?
  • What happens when an autonomous system kills the wrong person?


Make this a voting issue. If they can't explain the safeguards, they shouldn't be funding the technology.

STEP 3: Support Organizations Working on AI Weapons Governance

Groups advocating for meaningful human control over lethal systems:

  • Campaign to Stop Killer Robots (international coalition)
  • Future of Life Institute (AI safety research)
  • International Committee of the Red Cross (humanitarian law advocacy)


These organizations push for:

  • International treaties limiting autonomous weapons
  • Accountability frameworks for AI-driven strikes
  • Transparent testing and deployment standards


Even small donations or social media amplification helps.

STEP 4: Teach Digital Literacy, Especially to Young People

The next generation will live in a world where autonomous systems are normalized.

They need to understand:

  • How AI makes decisions (and where it fails)
  • Why human oversight matters in life-or-death contexts
  • That "the algorithm decided" is not an acceptable excuse


Start conversations now. Use current events. Ask questions like: "Should a machine be allowed to decide who dies? Why or why not?"

Critical thinking about AI is the best defense against its misuse.

STEP 5: Advocate for "Meaningful Human Control" Standards

The international debate centers on one principle: Meaningful Human Control (MHC).

This means:

  • Humans must understand how the system makes targeting decisions
  • Humans must be able to intervene before lethal force is used
  • Humans must be accountable for the outcomes


Push for:

  • Legislation requiring MHC in all military AI systems
  • International agreements banning fully autonomous weapons
  • Transparency in defense contracts involving AI targeting


This isn't about stopping technology. It's about ensuring that humans, not algorithms, remain responsible for killing.
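
As a thought experiment, those three MHC requirements can be read as design constraints. The sketch below is an assumption about how such a safeguard could look, not a description of any fielded system, and every name in it is hypothetical: an engagement gate that refuses to act without an explainable recommendation, an identified human approver, and an audit record.

```python
# Thought-experiment sketch of a "meaningful human control" gate.
# All names are hypothetical; this describes no fielded system.
import json
import time


def mhc_gate(recommendation: dict, operator_id, approved: bool,
             audit_log: list) -> bool:
    """Permit engagement only if all three MHC conditions hold."""
    # 1. Understanding: the system must explain its recommendation.
    explainable = bool(recommendation.get("reason"))
    # 2. Intervention: an identified human must explicitly approve
    #    before lethal force is used.
    human_approved = approved and operator_id is not None
    # 3. Accountability: record who decided, about what, and why.
    audit_log.append(json.dumps({
        "time": time.time(),
        "recommendation": recommendation,
        "operator": operator_id,
        "released": explainable and human_approved,
    }))
    return explainable and human_approved


log = []
rec = {"target": "vehicle-17", "score": 0.91, "reason": "matched profile A"}
print(mhc_gate(rec, operator_id=None, approved=False, audit_log=log))    # False
print(mhc_gate(rec, operator_id="op-042", approved=True, audit_log=log))  # True
```

Nothing about a gate like this is technically hard. What's missing is the legal requirement to build it in.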


If It's Already Too Late: What to Do When Autonomous Weapons Are Deployed

If your country announces deployment of lethal autonomous weapons without adequate safeguards:

  1. Don't panic, but don't stay silent. Public pressure has stopped or delayed military programs before.
  2. Document and share. Screenshot announcements. Save policy documents. Spread credible reporting.
  3. Contact advocacy groups immediately. Organizations like Campaign to Stop Killer Robots coordinate rapid response efforts.
  4. Demand post-deployment audits. If systems are already operational, push for transparent after-action reviews and accountability mechanisms.


Bottom line: Once autonomous weapons are normalized, rolling them back is nearly impossible. The window to act is now.


SOURCES

United Nations Security Council. (2021). Final report of the Panel of Experts on Libya. S/2021/229. https://www.undocs.org/S/2021/229

Campaign to Stop Killer Robots. (2023). Country positions on autonomous weapons systems. https://www.stopkillerrobots.org

Arms Control Association. (2020). The role of autonomous weapons in the Nagorno-Karabakh conflict. https://www.armscontrol.org

Future of Life Institute. (2023). Autonomous weapons: An open letter from AI & robotics researchers. https://futureoflife.org/open-letter-autonomous-weapons

International Committee of the Red Cross. (2021). Autonomous weapon systems: Technical, military, legal and humanitarian aspects. https://www.icrc.org

Human Rights Watch. (2020). Stopping killer robots: Country positions on banning fully autonomous weapons. https://www.hrw.org

U.S. Navy. (Multiple publications). Phalanx Close-In Weapon System documentation. https://www.navy.mil


Stay Connected with the Mission

I am building a community where we turn misfortune into momentum. If you find this valuable, I'd love to see you on my other platforms, where I dive even deeper:

🌐 Happenstance LLC - Explore our safety & tech services: https://www.goteamhappenstance.com

Educational & Video Content 📺 YouTube - Watch my latest security breakdowns: https://www.youtube.com/@Dr.GermaineWalke. Subscribe there for deep dives into the "Glitched Reality" and visual guides on protecting your physical and digital life. Glitched Reality videos and tech news coming soon!

Professional Hub & Newsletters 💼 LinkedIn - Connect with me for daily insights and access to the full archive of our Glitched Reality newsletters: https://www.linkedin.com/in/iamdrwalker/


⚡ Reality is glitched. You're the patch. ⚡

#GlitchedReality #AutonomousWeapons #AIWarfare #FutureOfWarfare #KillerRobots #MilitaryAI #TechEthics #HappenstanceLLC
