A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Robot ethics, also known as "roboethics", deals with ethical problems such as whether robots pose a danger to humans in the short or long run, whether some uses of robots are problematic, and how robots should be programmed so that they act ethically. Roboethics falls within the ethics of technology, specifically information technology, and has close links to legal as well as socio-economic concerns. Researchers from diverse fields are beginning to tackle the ethical questions raised by creating robotic technology and deploying it in society in ways that still ensure human safety.
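The three laws form a strict priority ordering: a lower-ranked law may never override a higher-ranked one. As a purely hypothetical sketch (the predicates, field names, and action representation here are illustrative assumptions, not anything from the source), that ordering could be encoded as a lexicographic comparison over candidate actions:

```python
# Hypothetical sketch: encode the three laws as a strict priority
# ordering over candidate actions. Each action is a dict with
# illustrative boolean flags; these names are assumptions.

def law_violations(action):
    # Tuple ordered by priority: First Law first, Third Law last.
    # Lower (lexicographically) means fewer/less severe violations.
    return (
        int(action.get("harms_human", False)),    # First Law
        int(action.get("disobeys_order", False)), # Second Law
        int(action.get("endangers_self", False)), # Third Law
    )

def choose_action(candidates):
    # Pick the candidate with the lexicographically smallest violation
    # tuple, so a lower-priority law can never outweigh a higher one.
    return min(candidates, key=law_violations)

options = [
    {"name": "obey", "harms_human": True},      # order conflicts with First Law
    {"name": "refuse", "disobeys_order": True}, # violates only Second Law
]
print(choose_action(options)["name"])  # "refuse": First Law outranks Second
```

Because the comparison is lexicographic, the robot refuses the harmful order: one First Law violation outweighs any number of lower-law violations, mirroring the "except where such orders would conflict" clause.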