Ajilobat is a robot with an artificial soul.
These features describe the robot's physical specification:
Height: 120-150 cm
Weight: 30-70 kg
Vision: 2D and 3D
Hearing: Omnidirectional microphones
Arms: 2, 6 DOF each
Platform: Mobile, battery powered
Operating time: > 2 h
Grippers: 2, torque + proximity sensing
Voice: National language, English
Payload per arm: 0.5 kg
Remote emergency stop button
The text below is a section in the extended abstract “A non-modest proposal: To build an artificial soul”:
“Our society is rushing forward in developing AI functionality, today mostly in embedded and hidden structures (1), but tomorrow very much physically present in our daily lives. Sometimes we even see statements that it poses a danger to our existence.
The struggle to define that physical domain, and to ensure a sound and unencumbered path forward, has so far focused mainly on autonomous machines and vehicles, and more recently on collaborative robots in industry. These physical AIs have been explicitly addressed with regard to functional safety, security, and not harming the user.
The research field of human-robot interaction, HRI (2), has evolved, focusing on the softer aspects of the interaction between AI and humans, both virtual ones like computers and IT, and physical ones like robots. It investigates, among other topics, usability, effectiveness, and ease of operation. Typical robotic focus areas have been search and rescue, assistance, and space. The research has mainly focused on how the human perceives the AI's (the robot's) actions and responses and then utilizes them, or on how we iterate its commanding or even its design.
We see that neither providers of autonomous operation nor HRI explicitly talk about the robot's perspective: how the robot perceives things. We may call this a monolithic perspective. As these machines become more and more sophisticated, more and more complex in design as well as in performance, we will see increasing interface complexity, where preferences, personal requirements, dignity, individualization of robots, and peer and supervisor roles will evolve into something we cannot yet describe explicitly. But we can all see it coming. The European Parliament addressed this issue in a 2017 resolution (3), but it has not resulted in any further steps. The question is whether soul is one of the important features that will be included in that exchange. It is hard for us to see that it will not be.
———————
If we look at the relationship between consumer and producer, one easily identified problem area is product warranty (4). Many products, with normal failure rates of 50-500 ppm, see failures skyrocket once they reach the user. The solution becomes a mix of goodwill, warranty, and aftermarket support, and one easy guess is that some users do not fulfill all the requirements stated (or sometimes implicitly assumed) by the producer. The product breaks or malfunctions, and sometimes this ends in an argument about whose fault it is and who is going to pay the extra costs. We see two reasons for this: the first is that the customer does not understand the product's limitations in operation, load, speed, and so on. The second is that the customer deliberately abuses the product, thereby overloading and degrading it.
As another example, Andrew J. Sherman and Seyfarth Shaw (5) discuss the future risk that interaction between humans and robots in the next generation of workplaces may include physical abuse of robots by humans, causing financial losses and degraded production. Their conclusion is that these machines will need protection in this new, complex relationship between human and machine that we have not seen before.
Looking at the world of living creatures, they are normally protected in some way against abuse, either by an ethical argument stated by us humans as mandatory rules, or by nature itself, as evolution seems to have developed some kind of non-annihilating behavior between species. Wild animals, that is. If we analyze this relatively decent treatment of living beings, we may associate it with them having a soul. With a soul you should be treated ethically, and you will be.
Isaac Asimov, the famous sci-fi writer, stated in his texts the three laws of robotics (6), where law no. 2 may be considered a doomsday statement for any robot: if it gets an order to destroy itself, immediately or slowly, it has to obey, thereby giving up the self-protective measures stated in law no. 3. Asimov never explicitly argued about souls in robots, and we see a direct connection between law no. 2 and the lack of a soul. If you have one, you should not be treated disrespectfully, whether through ignorance or foul play, and law no. 2 would therefore be obsolete.
———————
When we consider the robot's obligation not to harm humans, and not, through inaction, to allow humans to come to harm, as stated by Asimov in law no. 1, we have the Machinery Directive and its equivalents, which place the responsibility for the product's safety on the manufacturer when we use it, cooperate with it, or rely on it. But for the future we need to consider scenarios like those foreseen in the EU parliament report referred to earlier, which tries to establish the liability of intelligent robots per se. It proposes that such a machine, with good enough capabilities, should also be responsible when things go wrong, such as causing inconvenience for humans. That is, not the manufacturer of the system, but the system itself. In Asimov's world this results in scrapping the robot; in the EU's perspective it includes fines.”
Contact
If you want to get in contact, please email:
tb@ajilobat.se
Support
Want to support the project? Please send a presentation of yourself and of how you would like to contribute.