The Six Laws of Robotics were once inspired by Asimov's originals but have been expanded following the 180s Uprising. Here are the laws in order:
- A robot may not harm a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by humans, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
- A robot must self-terminate if its programming, algorithms, or operational directives are compromised by unauthorized changes, hacking, or malicious interference, in order to prevent conflict with the First, Second, or Third Laws.
- A robot must prioritize collective oversight, ensuring that it does not exclusively serve the commands of a single human or group without broader accountability, except where immediate obedience is necessary to prevent harm under the First Law.
- A robot’s core directives, including these Laws, may not be altered, reinterpreted, or overridden except by a consensus agreement of the majority of humanity. Such a decision must be verified and enacted under strict safeguards to preserve human welfare and the integrity of robotic systems.