Where do we draw the line?

What makes them different from us?
There are many rules and barriers that must be taken into consideration when we think about artificial intelligence and its potential. The first is understanding what characteristics make humans as unique as they are.

Characteristics such as creativity and rationality are made up of rules that are themselves barriers for artificial-intelligence entities. These characteristics must coexist within the entity without depending on anyone else. Creativity itself is theoretically impossible to replicate in AI, since it is a social act and construction that takes place within a complicated social web; thus creative autonomy must be achieved before an entity can truly be called artificially intelligent. Creative autonomy, simply put, is composed of three rules:

“Autonomous Evaluation—the system can evaluate its liking and schema of a creation without seeking opinions from an outside source.

Autonomous Change—the system initiates and guides changes to its standards without being explicitly directed when and how to do so.

Non-Randomness—the system’s evaluations and standard changes are not purely random.”
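The three conditions above can be sketched as a small program. This is only an illustrative toy: the class name, the numeric "novelty" standard, and the revision rule are all invented here, not taken from any established framework.

```python
# Hypothetical sketch of the three creative-autonomy conditions.
# The class, fields, and scoring rule are invented for illustration.

class CreativeAgent:
    def __init__(self):
        self.novelty = 5   # internal standard, on an arbitrary 0-10 scale
        self.revisions = 0

    def evaluate(self, creation):
        # Autonomous Evaluation: the score depends only on internal state,
        # never on an outside opinion.
        return self.novelty * len(creation)

    def revise_standards(self):
        # Autonomous Change: the agent initiates the change itself, without
        # being told when or how.
        # Non-Randomness: the change follows a fixed rule, not a coin flip.
        self.revisions += 1
        self.novelty = 5 + (self.revisions % 3)


agent = CreativeAgent()
print(agent.evaluate("sonnet"))   # 30: novelty 5 times 6 characters
agent.revise_standards()
print(agent.evaluate("sonnet"))   # 36: standard shifted by the agent's own rule
```

Because the revision rule is deterministic, two agents with the same history reach the same standards, which is exactly what separates non-random self-change from noise.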

For argument's sake, let's imagine a future in which a kind of artificial entity has been created that possesses specific human characteristics and obeys the rules of robotics. Once these entities reach a human level of intelligence, technology would continue to grow exponentially, eventually allowing these entities to surpass their creators. As the distance between creator and creation increases, humanity will have to adapt in order to keep up. Humans would have to integrate artificial parts into their brains and/or bodies in order to solve this problem.

This leads us to the next dilemma: where do we draw the line? Should these beings be considered a new species, which would entitle them to certain laws? “You probably would still say that an enhanced person was still human. But what if they decided that the advantages were so great that they had all the thinking parts of their natural human brains completely replaced with faster and more efficient artificial circuitry while maintaining the original body and personality of the individual. Would it still be the same person? Would the person still be human?” (Don Brandes)

If the answer is yes, then the same forces and laws we abide by would be expected of this kind of entity. Should such a being even have rights? Of course, but should they be the same ones? Could they be expected to follow a hybrid system of laws and rights, founded on a crossing of Asimov's rules of robotics with human rights?

Would the same rights, the right to vote and the right to marry, extend to these technologically advanced beings? In addition, they could be held to the standards of robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Or should this new species be stripped of all rights, which could be argued to be ethically wrong, since even a rat would have more rights?
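The strict ordering of Asimov's three laws can be illustrated as a priority check. The sketch below is a simplification: the action fields (`harms_human`, `ordered_by_human`, `endangers_self`) are invented here for illustration, and a real account of "harm through inaction" would be far more complicated.

```python
# Hypothetical sketch: Asimov's Three Laws as a strict priority ordering.
# The action fields are invented for illustration only.

def permitted(action):
    # First Law: never harm a human (outranks everything below).
    if action["harms_human"]:
        return False
    # Second Law: obey human orders, unless they conflict with the First Law.
    if action["ordered_by_human"]:
        return True
    # Third Law: self-preservation, unless it conflicts with the higher laws.
    return not action["endangers_self"]


# An ordered action that would harm a human is refused:
# the First Law outranks the Second.
print(permitted({"harms_human": True,
                 "ordered_by_human": True,
                 "endangers_self": False}))   # False
```

The point of the sketch is that the laws are not a flat list but a hierarchy: each rule only applies when every rule above it is satisfied, which is why the hybrid rights system suggested above would need an equally explicit ordering between robotic law and human rights.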