Cole's Notes

A Simple Blog Powered by Go

If It’s Impossible to Do Harm, Harm Cannot Thrive

Posted by cole on Feb 9, 2026 20:13

Why Accessibility and Care Are Core Tenets of Ethical Software Engineering

There is a common myth in software engineering that security and ethics are about stopping bad people.

They are not.

They are about removing the conditions that allow harm to propagate.

I learned this long before I ever worked with code. It came from my grandmother, who taught me (in so few words):

If there is nothing to steal, you cannot be a thief.
If I have already given you what you need, how could you take it from me?
If I am prepared for loss, guarded against harm, and make it impossible for your bad choices to negatively affect you, then I have protected you — even from yourself.

At the time, it sounded like moral guidance.
Later, I realized it was systems thinking.


Harm is rarely caused by individuals alone

Most harm does not come from malicious people.
It comes from fragile systems:

  • systems that concentrate power
  • systems that assume good behavior
  • systems that rely on secrecy instead of structure
  • systems where failure cascades and scarcity is weaponized

When we design software that assumes:

  • users will behave correctly
  • environments will be trusted
  • access will not be abused
  • edge cases "won't happen"

we are not being optimistic — we are being negligent.
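
Here is what the opposite posture can look like in code. This is only a sketch, in Go because that is what this blog runs on; WithdrawRequest and its limits are invented for illustration, not taken from any real system. The point is that input stays untrusted until it is validated, and every "won't happen" case gets its own explicit branch.

    package main

    import (
        "errors"
        "fmt"
    )

    // WithdrawRequest is a hypothetical input type. Everything in it
    // arrives from outside the program and is untrusted until proven otherwise.
    type WithdrawRequest struct {
        AccountID string
        Amount    int64 // cents
    }

    var errInvalid = errors.New("invalid withdrawal request")

    // validate refuses to assume the caller behaved correctly.
    // Each "impossible" case is handled explicitly instead of assumed away.
    func validate(req WithdrawRequest, balance int64) error {
        switch {
        case req.AccountID == "":
            return fmt.Errorf("%w: missing account", errInvalid)
        case req.Amount <= 0:
            return fmt.Errorf("%w: non-positive amount", errInvalid)
        case req.Amount > balance:
            return fmt.Errorf("%w: amount exceeds balance", errInvalid)
        default:
            return nil
        }
    }

    func main() {
        // A request that "shouldn't happen" is rejected, not trusted.
        err := validate(WithdrawRequest{AccountID: "a-42", Amount: -500}, 10_000)
        fmt.Println(err)
    }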


Ethical software is not about punishment

It's about architecture

There is a profound difference between:

  • "We will punish misuse." and
  • "Misuse cannot meaningfully cause harm."

The first relies on surveillance, enforcement, and reaction.
The second relies on design.

Architecture scales. Enforcement does not.

The most ethical systems I've studied — in security, accessibility, distributed systems, and even game design — all converge on the same idea:

Do not try to stop people from being imperfect.
Design systems where imperfection is safe.
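
In code, this usually means making the unsafe thing impossible to construct rather than detecting and punishing it later. A minimal Go sketch, with an invented Percentage type: the invariant is enforced once at the boundary, and after that no downstream code has to police anything.

    package main

    import (
        "errors"
        "fmt"
    )

    // Percentage is a hypothetical example type. Its field is unexported,
    // so code outside this package can only obtain one through NewPercentage.
    type Percentage struct {
        value int
    }

    // NewPercentage enforces the invariant exactly once, at construction.
    // Beyond this point, an out-of-range Percentage cannot exist.
    func NewPercentage(v int) (Percentage, error) {
        if v < 0 || v > 100 {
            return Percentage{}, errors.New("percentage must be between 0 and 100")
        }
        return Percentage{value: v}, nil
    }

    // Apply never re-checks the range; the type already guarantees it.
    func Apply(p Percentage, amount int) int {
        return amount * p.value / 100
    }

    func main() {
        if _, err := NewPercentage(150); err != nil {
            fmt.Println("rejected at the boundary:", err)
        }
        p, _ := NewPercentage(25)
        fmt.Println(Apply(p, 400)) // 100
    }

Nothing in Apply watches for misuse, because misuse was made unrepresentable before it could reach that far.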


Accessibility is a security concern

Security is a care concern

Accessibility is often framed as a compliance checkbox.
Security is often framed as an adversarial arms race.

Both framings miss the point.

When a system is inaccessible:

  • people are locked out of participation
  • workarounds become necessary
  • power concentrates in those who can navigate it
  • resentment grows
  • harm becomes attractive

When a system is hostile by default:

  • users are treated as suspects
  • mistakes are punished
  • failure becomes catastrophic
  • people disengage or lash out

Care and accessibility reduce harm at the root.

They remove the need for theft, circumvention, and coercion by eliminating artificial scarcity and fragility.


The lesson from security engineering

In modern security thinking, there is a hard-won truth:

You cannot stop a determined attacker from observing a system.
You can prevent them from gaining authority.

The best systems do not rely on secrecy or trust.
They rely on invariants.

Even if someone:

  • inspects memory
  • manipulates inputs
  • behaves adversarially

…the system does not give them more power than they were explicitly granted.
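
Sketched in Go, with invented names: authority is something a component is handed explicitly, never something it reaches for. The Reader interface below is a deliberately narrow capability.

    package main

    import "fmt"

    // Reader is a narrow capability: the ability to look, and nothing more.
    type Reader interface {
        Get(key string) (string, bool)
    }

    // store holds the data and supports both reading and writing,
    // but nothing obliges us to hand out all of that authority at once.
    type store struct {
        data map[string]string
    }

    func (s *store) Get(key string) (string, bool) { v, ok := s.data[key]; return v, ok }
    func (s *store) Set(key, value string)         { s.data[key] = value }

    // report is handed only the Reader capability. It has no declared way
    // to write; widening its authority would take an explicit, visible
    // type assertion rather than ambient access it was never granted.
    func report(r Reader) {
        if v, ok := r.Get("greeting"); ok {
            fmt.Println("found:", v)
        }
    }

    func main() {
        s := &store{data: map[string]string{"greeting": "hello"}}
        report(s) // s is passed as a Reader; write access stays behind
    }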

This same philosophy applies to social harm.

If:

  • exploitation does not grant advantage
  • cruelty does not amplify power
  • theft does not improve outcomes
  • failure does not cascade

Then harmful behavior withers — not because people are perfect, but because the system no longer rewards it.


Designing for care is not naïve

It is rigorous

Designing systems that are:

  • accessible
  • fault-tolerant
  • resilient
  • explicit in authority
  • safe under failure

is harder than building punitive systems.

It requires:

  • thinking adversarially without becoming cynical
  • modeling worst-case behavior without assuming bad intent
  • accepting that humans are fallible and designing accordingly

This is not "soft" engineering.

It is deeply disciplined engineering.
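
One small face of that discipline is deciding, in advance, what a component does when the thing it depends on fails. A Go sketch with invented names: the lookup degrades to a known-safe default instead of letting the outage cascade upward.

    package main

    import (
        "errors"
        "fmt"
    )

    // featureFlags stands in for any dependency that can fail:
    // a config service, a database, a network call.
    type featureFlags interface {
        Enabled(name string) (bool, error)
    }

    // flakyFlags simulates that dependency being down.
    type flakyFlags struct{}

    func (flakyFlags) Enabled(string) (bool, error) {
        return false, errors.New("flag service unavailable")
    }

    // isEnabled fails safe: when the dependency is unavailable, it returns
    // the conservative default instead of panicking or passing the outage on.
    func isEnabled(flags featureFlags, name string, safeDefault bool) bool {
        on, err := flags.Enabled(name)
        if err != nil {
            // In a real system this is where the failure would be logged.
            // The caller keeps working with a known-safe answer.
            return safeDefault
        }
        return on
    }

    func main() {
        // The outage does not cascade; the feature simply stays off.
        fmt.Println(isEnabled(flakyFlags{}, "new-checkout", false))
    }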


What ethical software engineering actually asks of us

Ethical engineering does not ask:

  • "How do we stop bad actors?"
  • "How do we enforce compliance?"
  • "How do we lock things down harder?"

It asks:

  • "What harm is possible here?"
  • "What incentives are we creating?"
  • "What happens when someone fails?"
  • "Who is excluded by this design?"
  • "What authority is implicit rather than explicit?"

And perhaps the hardest question of all:

If someone acts badly inside this system, does the system amplify the harm — or absorb it?


A quieter goal worth striving for

I care about building systems that are:

  • impossible to meaningfully exploit
  • unable to fail catastrophically
  • structured enough that harm cannot propagate
  • generous enough that theft is unnecessary
  • resilient enough that mistakes are survivable

Not because people are always good —
but because a good system does not require people to be perfect in order to be safe.

That belief didn't come from a textbook.
It came from someone who understood that care, when made structural, becomes protection.

And that is why accessibility and care are not "nice to have" in software engineering.

They are moral and ethical foundations.
