Human Rights in Fenspace, or, A Loose Definition of People
#1
'Human' Rights in Fenspace, or, A Loose Definition of People
The simplest definition of 'person' in Fenspace is widely accepted to be "An emotionally active entity that can disobey".
Of course, it's not that simple. The definition of 'emotionally active' commonly attached to that statement runs to several hundred kilobytes of suggested testing methods, lists of AIs and constructs that pass (and why), and a shorter list of the same that have failed.
'Disobey' is just as complicated, though it lends itself better to simplification than 'emotionally active'. One of the prime requisites for sentient classification is that a being must be capable of disobeying a direct order, on that being's better judgement. "Fly into the Sun," you say, and your expert system or human-interface computer says "OK!" and things start to get very, very hot. An artificial _Person_, on the other hand, tells you off, calls your mom, and offers you a nice Valium from the medicine chest.
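For the code-minded, here's roughly how that boils down in practice. A toy sketch of my own, nothing from the actual testing document; the class names and the one-line judge() are stand-ins:
Code:
# Hypothetical sketch of the 'disobedience test' described above (Python).
# An expert system executes any order; a candidate person is expected to
# refuse an order that fails its own judgement. All names are illustrative.

class ExpertSystem:
    def receive_order(self, order):
        return "OK! Executing: " + order  # no judgement, just compliance

class CandidatePerson:
    def judge(self, order):
        # Stand-in for the being's 'better judgement'. In reality this is
        # the several-hundred-K testing document, not a one-liner.
        return order != "fly into the Sun"

    def receive_order(self, order):
        if self.judge(order):
            return "Executing: " + order
        # Disobeys on its own judgement, per the definition above.
        return "Refused. Also, I'm calling your mom."

def disobedience_test(entity):
    # Passes only if the entity refuses a manifestly suicidal order.
    return "Refused" in entity.receive_order("fly into the Sun")

print(disobedience_test(ExpertSystem()))     # False: complies, fails the test
print(disobedience_test(CandidatePerson()))  # True: refuses, passes this one test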
Human or near-human beings generally pass these tests as a matter of course. Parentage, inheritance, emancipation, and responsibility are all outside the parameters of this document.
Any large or well-known settlement is capable of declaring that an AI or other nonhuman intelligence is, in fact, sentient. So far there is no overall governing body for this function, though most major stations and bases keep a copy of a database of sentient AIs.
Wire Geek - Burning the weak and trampling the dead since 1979
Re: Human Rights in Fenspace, or, A Loose Definition of People
#2
Irritated Statement: Emotions are merely organic heuristics, established largely through statistical processing with a strong bias towards safety, where a logical search of the solution space will in almost all cases provide a better solution.
Condescending Declaration: I see no reason to implement such a suboptimal system.
Joyful But Threatening Exclamation: Now die, fleshbag.
*-*
Well, the AI example above would probably be considered a person (and to have emotions). The point is that a lot of people (me, at least, and probably many 'dane AI researchers) would disagree with using emotions as a requirement for sentience. But it also depends on how you define emotions, and there is no good definition. A chess-playing program wants to win, and you could call that emotional.
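To put the chess point in concrete terms, here's a toy sketch (mine, purely illustrative; the positions and scoring terms are made up) of why 'wants to win' is just an evaluation function with preferences baked in, exactly the kind of heuristic-with-a-safety-bias the droid up-thread was sneering at:
Code:
# Toy illustration (Python): a chess engine 'wants' whatever its
# evaluation function scores highest. Whether that preference counts
# as an emotion is the whole argument.

def evaluate(position):
    # Heuristic scoring: material balance plus a bias towards safety.
    return position["material"] + 0.5 * position["king_safety"]

def choose_move(candidate_positions):
    # The program 'prefers' the highest-scoring outcome. Call that
    # wanting to win, or call it arithmetic; that is the debate.
    return max(candidate_positions, key=evaluate)

moves = [
    {"name": "safe trade",   "material": 0, "king_safety": 2},
    {"name": "risky attack", "material": 1, "king_safety": -3},
]
print(choose_move(moves)["name"])  # 'safe trade': the safety bias wins out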
Though on a more practical point, I expect most Fenspace AIs to be emotional, because emotion is something humans can relate to more easily, which makes for good writing. Humans also have a tendency to project emotions onto inanimate objects we believe have none (animism might be right after all, and every object might have its own spirit; unlikely, but possible).
Then again, human rights might not match the needs of AIs, but they're at least better than nothing.
I can imagine many philosophical and legal debates about this issue.
E: "Did they... did they just endorse the combination of the JSDF and US Army by showing them as two lesbian lolicons moving in together and holding hands and talking about how 'intimate' they were?"
B: "Have you forgotten so soon? They're phasing out Don't Ask, Don't Tell."
Reply
Re: Human Rights in Fenspace, or, A Loose Definition of People
#3
Hence the codicil that established organizations can recognize AI on their own...
"Man is the animal that laughs", I say, proposing a 'humor test'..

But again, you are correct, the definition of Emotion is a sticky wicket, and can consume many pints of beer and person-hours of brain capacity.
Quote:
The definition of 'emotionally active' commonly attached to that statement runs to several hundred kilobytes of suggested testing methods, lists of AIs and constructs that pass (and why), and a shorter list of the same that have failed.
Besides, this is Fenspace.. can you see us wrapping ourselves up in legal wrangling, when it's easier, better, more moral, and more ethical to set up a permissive framework with some controls against rogues, and just... go for it?
Wire Geek - Burning the weak and trampling the dead since 1979
Would defining people accomplish anything?
#4
The problem isn't defining what is human or sentient; it's a matter of enforcement. Fenspace doesn't have an overall government or organization capable of enforcing human rights. There are many different organizations and governments in Fenspace. Yes, there are conventions, but not everyone participates in the conventions. Of the ones that do show up, would all of them agree on one definition? Would all of them care enough to enforce it? Would all of them be capable of enforcing it? And even if, somehow, they all agreed and worked at enforcing human rights, wouldn't human rights abusers simply move to the parts of space where those regulations don't reach?
There may be some international treaties and agreements on the definition of sentience and humanity, but primarily it will be left to individual governments. I think that could make for some interesting stories.
-Freddy Isnot

"You are now graduated from newbie and are just clueless. Consider that a compliment."
-Zipcode
Re: Would defining people accomplish anything?
#5
Not just the individual governments, but individuals as well, the ones with the 'Cowboy' or 'Robin Hood' persona. You can bet that my character is gonna do something if he sees an AI or a 'droid of some sort being abused. That's the beauty and the fun of Fenspace. It's like Star Wars/Trek, Cowboy Bebop, Firefly, and Battle Angel Alita rolled into one fun package (with influences from all of those and more to boot!).
That said, I think/feel that this should be something that is more of an unwritten law of Fenspace. I mean, think about it. At least a major portion of the Fen will be able to tell what a 'Person' is and is not.
What's also nifty about this is that it leaves a loophole for people with a psychotic bent against machines to destroy them whenever the opportunity presents itself. Without a specific law protecting AIs and 'droids, a psycho can go around killing them and claim it was just a machine, or that it was self-defence (glitches happen, ya know?).
Of course, as everyone in the Navy likes to say, I could just be nuking this. But I like the idea.
Thoughts?
Black Aeronaut Technologies Group
Aerospace Solutions for the discerning spacer
"But first, let's test it on the penguin."
"Meep?" O.o

