
Robots: Who takes responsibility?

  • Writer: Kehinde Soetan
  • Jul 6
  • 3 min read

Updated: Jul 7


The evolution of artificial intelligence and the changing landscape of technology have prompted organisations, governments, authorities and even individuals to look into who should take accountability when a robot fails or malfunctions. The Cambridge Dictionary describes being adaptive as “having the ability to change to suit changing conditions”. Most robots (if not all) can be said to be adaptive. This means that robots can make decisions without direct human involvement or interference, either based on feedback or based on real-time data. For example, a robot used in the medical sector can improve its behaviour based on real-time data changes, or based on feedback received from the surgery it is performing.
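To make the idea of adaptivity concrete, here is a minimal Python sketch of a control loop that adjusts its own behaviour from real-time feedback, without a human in the loop. It is purely illustrative: the SurgicalRobot class, its fields and the sensor readings are all hypothetical, not any real robot's API.

```python
# Minimal, hypothetical sketch of "adaptive" behaviour: the robot changes its
# own parameters from a live sensor stream, with no human intervention.

class SurgicalRobot:
    def __init__(self, speed_mm_s: float = 2.0):
        self.speed_mm_s = speed_mm_s  # current positioning speed

    def adapt(self, tissue_resistance: float) -> None:
        """Adjust speed from live feedback: higher resistance -> slow down."""
        if tissue_resistance > 0.8:
            self.speed_mm_s = max(0.5, self.speed_mm_s * 0.5)
        elif tissue_resistance < 0.2:
            self.speed_mm_s = min(5.0, self.speed_mm_s * 1.2)

robot = SurgicalRobot()
for reading in [0.1, 0.4, 0.9]:      # simulated real-time sensor readings
    robot.adapt(reading)
    print(f"resistance={reading:.1f} -> speed={robot.speed_mm_s:.2f} mm/s")
```

The point of the sketch is that the robot's behaviour at any moment depends on data the manufacturer never saw, which is exactly what makes assigning blame after the fact so hard.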


Who takes accountability, or the blame, when a self-driving car hits an unaware pedestrian standing at a bus stop, especially if the accident occurred as a result of the robot's adaptive capabilities, that is, the robot adjusting its behaviour based on real-time data (where the adjusted behaviour could mean trying to avoid another accident, say with a child)? The inability to relate to a robot the way you would to a human makes it very difficult for anyone to take responsibility or accountability in situations like the self-driving car example. Existing liability models don't help either, as they appear not to have been written with the current technology landscape in mind. According to the Number Analytics website, a person can be liable on grounds of product harm, negligence, intent, tort and others. For example, a user can decide to hold a manufacturer liable, and could press charges, if the manufacturer's product causes them harm.


However, in the case of robots, it is very difficult for liability to rest on just the individual who uses the robot, or on the robot itself. The complexity through which a robot is “born” makes it hard for any one person, or the robot, to take the blame when something goes wrong. For example, the lack of empathy and the inability of a robot to understand a non-verbal patient's pain during a surgical procedure could make it difficult for the robot to stop the procedure when the patient is in intense pain, or to adjust its behaviour accordingly (if there is no human doctor present). Secondly, the fact that the ailment the robot is operating on might not directly be the cause of a patient's death (but could still have caused it indirectly) also makes it more complicated for accountability to be assigned when a robot is used for surgical procedures. For example, post-surgical bleeding might not have been caused directly by the robot but could still lead to the patient's death.


The above examples show how complex it can get when deciding who should take responsibility for a robot's mistake or malfunction. When it comes to robots and liability, organisations, regulatory bodies as well as governments need to take a deep dive into mapping out how liability should be shared. For example: are the developers liable, are the data experts & machine learning engineers liable, is the product owner liable, is the designer liable, is the organisation liable, or is the user liable? Care should be taken to look at liability on a case-by-case basis, as there may be no one-size-fits-all way of handling this. Since each of these stakeholders has contributed in one way or another to the production and use of the robot, every stakeholder should act responsibly when a malfunction shows up, and effort should be made to ensure that the root cause of every malfunction is traced.
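One practical step towards that kind of case-by-case accountability is simply recording, for every robot, which stakeholder played which role, so a malfunction can be traced back through them. The Python sketch below is a hypothetical illustration of such an incident record; every name, field and value in it is invented for the example.

```python
# Hypothetical incident record mapping a malfunction to the stakeholders who
# contributed to the robot, so root-cause analysis can apportion responsibility.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, Optional

@dataclass
class IncidentRecord:
    robot_id: str
    description: str
    occurred_at: datetime
    stakeholders: Dict[str, str] = field(default_factory=dict)  # role -> party
    root_cause: Optional[str] = None  # filled in after investigation

incident = IncidentRecord(
    robot_id="SR-042",
    description="Procedure continued despite abnormal bleeding signal",
    occurred_at=datetime(2025, 7, 6, 14, 30),
    stakeholders={
        "developer": "Vendor engineering team",
        "data & ML engineer": "Vendor model team",
        "product owner": "Vendor product lead",
        "designer": "Vendor safety designer",
        "organisation": "Operating hospital",
        "user": "Attending surgeon",
    },
)
incident.root_cause = "Sensor threshold mis-calibrated during last software update"
print(incident.root_cause)
```

A record like this does not decide who is liable, but it gives regulators and courts the trace they need to decide on a case-by-case basis.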


Apart from stakeholders acting responsibly, and in order to limit the number of robot malfunctions that happen, organisations should design robots with safety in mind, verification and quality engineers should ensure that robots fail safely during testing, audited data should be used when training robots, data ethics should be adhered to, and developers should produce clean code. Lastly, a robust legal framework should be enacted by regulatory bodies to serve as a guide and make it easier to understand the roles of the different stakeholders when a robot malfunctions.
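As a rough illustration of the “fail safely” idea, the hypothetical Python sketch below drops the robot into a known safe state the moment a reading falls outside its expected range, rather than carrying on; safe_stop and perform_step are placeholders, not a real robot API.

```python
# Illustrative fail-safe pattern: any unexpected condition halts the robot in a
# known safe state and is logged, instead of the robot continuing to operate.

def safe_stop() -> None:
    print("Entering safe state: motion halted, operators alerted.")

def perform_step(sensor_value: float) -> None:
    if not 0.0 <= sensor_value <= 1.0:
        raise ValueError(f"Sensor reading out of range: {sensor_value}")
    print(f"Step executed with sensor value {sensor_value:.2f}")

def run(readings) -> None:
    for value in readings:
        try:
            perform_step(value)
        except Exception as exc:  # any fault -> fail safe, keep the trace for audit
            print(f"Fault detected: {exc}")
            safe_stop()
            break

run([0.3, 0.6, 7.5])  # the out-of-range reading triggers the safe stop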

